[Binary POSIX tar archive; gzip-compressed payload is not recoverable as text. Archive listing from the ustar headers:]

var/home/core/zuul-output/                       (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/                  (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/kubelet.log.gz    (file, mode 0644, owner core:core)
/m-+d7ouw`h̑]wے╍Fn&$,6(hp*YN5[U?}XiLwüPO&]hoϧׂsuvO:vBǷd0>F7WoֿY-^T[E:?x oC_)do^<ݹq\e'{5.и'K[AO-ASI 'Rp8 %9fH)}ƺrBr 4[YA B͗hEXъ]V\}AP _P-ܢF@ rU*t@]a_e]ז |zlBx΂D:`(Bk3rV!:Z 6qh0KXIN,(c:;nKg-*9j3aA*Gr&L?nƒڃLԄGVu} [es`f{fU]_'/C.S7T!ɵUc KuF3paBߒ@A4Wߔ";Zӈ%S5>kH+Zd$}TVxKL;'9 |Dlb,}2 O[5(cQG<71 ;V xԫ.w(s m`QJsmM;=хI3NN]3ӕ,e{bg[glvBdDSCrhjpWNϜp8MMjYOSjhj(km$Řu]Xl OnIZ-˺ZV.`<D?Yaz3j`X_Lm wZhf t+Dž'#ԑ_Ĭ-lf1?fe:zZ`^t|WT_g/}+7Aj^C!=ilz6TL]hs&"-7wM\Ya5%gȜd<2[%rLiY0wҐ?U?+qhk7|1[.Ӵ{kӛ&Æ+a\e !12TƐRhWƐR-2T6RC*cHe !12N(i5m]kyxMZ^ky{}gTNΦZmyܦmyfajg?H2 ьrM9Ԙ`TTRuI2CS"n-=zyzX! MB$L赑y̡ڏ)PuE_F.gf/~ۛ9I}uk_O7˺tK3*Z1{>|E\ΥlEJjG,0/}*qlcQHWGJ\ƻ/e)wm-y܄[ߞջ~xI/BE$PތFYy}Ȋ>8#Emu7u 9i&gV(,rs eHч8@YJ Nmoo:u(f"hé 71Vy>]8\Y委i 69c& ِrxym9 λKjk0% aڄ m$tA$Eor\K)HHV؉PkOz<7Ag4nI3\??6LЮJGfʺ dK -2P4ƌ,uzwda1XŁ`*9Sĕ)eFM^!J,.,KjcƜiPje2 RH!Z:knd1`7biۅmشV-ZZvփQƳ ݄'&q|s?]ܷכhr8XTko=O)4~C4MA͛S^GKȪlH>N)׻5Xnhiۇ ]mo6]]ZˬY tYW `W'6jol[{o}`hucڠz.Nq:ȥv on2ّb+6=t_rCZVJn'tY$ͪʇ<ЌNޠ}LH {a>S[$yǡmc>'](=4[l]{GM`tOx#LMnǑxNPiY!@'}}- (<.;'nPԤf3ގ6N:vy:'Yd)[ks6J[&'{wqÐ3s6 +JsQZ9YŃK]#X "R4Gv߸~xFp[p|Q<6cJ(*aתAįD 4euP bgĘYvØI:v6EھSg,R1̔v-RI)}j~Iwn`'u420ȴVN0,ܙ^ ; 娒[Zpt\a; hlx~߫3cyh3'S:]xŭκ?TyC_/׿$ >ȭΛBg/$cWӞ_:LZP9t p~ ~+fo0:juNvlgv;0gE-o`Kt"(|D@sL.$-<#% -yiYmP218rcDN$[UNAQv2Srq<ލqY 9@b ʬ7bbZqLc~ni[M56|mQPKׂ(f{XTn*<|kG!VIasͤ,4y۠B!k}péy,>Ҙڔ2e"%RpHS"bEKtRY>g.K MH%TY,3gB" 4v%ΞW!퀊g<?9h#$z) x)0sIRs8ŀC|I&&'mm//Tޡ) )6JbchX:PYC1YDdYKU,ijB^ղ%r_֧_4s lA s@Mb(:1trle@R4sESRKl 5WW}AYak0LWL^-Eb8 .Adӈ\JiǸV9k$2ާ/5=-;ö>wKml\o]J-Zq-%JX&d:~N]t=\lMVQ\)7bhe_0OZ%f3E5?t+"E6C*mvy9yxP)Df,'-`ͦ0ņ_4KϲX>Cp)UߒoFnX'%S I.C ׃EmudKyƨ$z~swNȚdyn UC )pO3Ğp"x{6drӃQ̈0:JR(ω/{ƮOIpqWwb6Nّ'NeY-rK*jl@~u:W@z!"CWZc* b/"cfW7~U h,֮vr'M]/"Wyr]NTlb+ N&/VH;t8|qdO9nliЮG X KW0X]gT),9 Mأ8X%0ZK"n\Z]&"dx= W$aSXrK[i#}+D!"Pˁ̅ElAO18 ǤRX$)K0 d7U'n8sT 7IJm7ϳ1*Z@:hP42dXP3#Sv: H Hgs`fmO[i'}3BQ&AIrXUEJ/PL'dvQppf`1.Oܳ:SrRGc}{?YGul3n 1!eY9˅B[ قpfdaNj)<G> CE;!~?Zth{W3uRԫG j8Sn9T'[=KJCJ+H,WcQhPkw*q:^\t%%OS8xBT٣8=R_l\YK56V$8gLorGd4Š}V%`n^y5Dߴ3L"*|7 X7Xo[OPJgz޶ t^]6S[[6iͅc ..n0`ݘ%(#pUox$/<*4] W)AO ':xP 
ȄћX\F.`Ui,> -Ō9]$r (\)P!EYSrU0Nc4-92[!̧ |Ɉ/'.B5zv>60o {0å_WnW^G XoBmVO+\$];?͓[_q'AcyZ3jm͖?s2;6(%l?vxS_R䍖f[ݿct҄Z=UhU`|ozOңjnzlTC)%C7%B찈7"!t:' l@QGS@V>`3@ W[ 3{q~hUjr;ߤ aYgmO_nߏpJ!X^ٵGa9˚vu@օKyW^Mi]{_ fc.fNڍP)ͅ'$'I'\VIMd=‚ 4, ~0m|iΡo~F8,y{]i ӢY%y/ө}C&l4;v߃e0|?܊ m/lXQ\(jmbSJp;ՕU.m< \.9^Ȗms\Uj:,ץYnk |N8|w!4 8>Py떫nk[+ jxfV̸5Ŧava&g'Vl&k=Ñ֏x 6u t2Sj>2|7P> ;3ޖNZXv: @yʈJy',/aN(I" i>GL0wU(.-atBm}oyK٦PhOd5%XDTo^ݭ[47Y`u<\cVe,&֊pEi!bQtthdٓ!dt 0½ E{'y(t6cIy1YRҖXL<!wBp|PwQ~)t X@b]%8l! ؘ6EyԠ.~0bQ OҢl0R &+!!DlsWJIKUts PmVy0Kh2t}hdI%]VAaKͥhx̓tK dV 8DP$,࿕kLvtDh[V쮛q˕-F߻2fV+z@(۽iNPgc֟}c"bٿTR>01ymArъwU2?+Az=:j6GxFb 2w[Hl1'ݠl>}|z `%$2gYgOotЏ: ;uҏz 0[N?O|K1RԈAdEٙ"T:yx)Dƃ7NKQ!~AuF_7nGV]"پp\ׯqoU 92@1TJ'|m96dPsK4.+}.z!Vf6jZs:Zs}5tH$݄妰;Hwy'.&A&.\b)b-3 0ƫXM9ՙQǠ.$`S`-{n:K0{oRA V@:029ntzP? Z^l5ײ6c=5S_` &d \ݿ QD,cEHIGhgXn|u4ӐcC5#n~Uǚcbt!]`A"5!am3Z%Hԕ_ߧ_?׿JG&8,L?~^ KjcUM華(ʵPkͮdDc5!{$ݵTG&1 l#Rj,n4&k0ɚT+ .˼Y&nT_7duM'SDΰI" Cxԓ˭IF7tXor:,ac:-<1琮em|NsIA|ٛxRA߹NZgtVHLG/D`pGr@HdB$UTɒǔKLXJx SQ5bET`2*2K(-ꂹ(pr(W(Mdrp*W^*HB٤Ln&NEw1ئN:>~c/a(f>'rv$ZEb=$rcSh(YQ JX)e "'M!p*F\5TMJR\:|bqIJVJ lT%dfq;yErv 'ѫ<bm~^s'u6om]^qUTG0rq[p[~/3A/33 WY<=pVRbW*5}t£;x:[{Mw67VٝyΏFׇvo}u3y3=px9s/GO'^5׏7{myLtkB_/׿5ruX Gm;Go.k4CۭꋡVi:mJH}7]럄3>uODxy{3MMsHi8ȷXgojŘMes v7n\ G{K;Vν9y;X1oU{K>}[=~f/pRY5p!b Q4 \iEsRTj}ŦRxQ.NwL2F+V!)Z+\j| LE'?-mnd3=9j\zr8g-^~jsQMW> j-5]t8:pTH^zrM|2R0LX"zel&q6ex[iƱk!6^ q\ Ϻ>9!;g,/<}K{/ta2}r3ISY IbJM Fd'[e>7%"u-0fEqxO1Q +ؐӀ 8 dȔTIW,]2%!R$#lChaa϶/Gm٘w ȡu֨cAEY!Ea6ـU٥ā8xU8-miimag08N0>VE{[:؜ҧ ztNI*?h\)ĆjNudy4&>j<̞}X &ͮwkrLq5 xw7oGG9[-㫪,Edzkl5`Yo=nJʗ_oZVJdy:R·h0ֲnF݀ŢL"B[3I=:Jj6%%aA !&mJ d@g>roیt#IS-LGN5-Q\=5yߺl"-se9-t]}|c╍achg1Pޫ۩P/rxXuiB6-ָjӫAoVUnHWjsJ"-Y w󁺯mq`oaW DiDs9[mPj=BE#^: љض71!FyPƱr *چM_M@ z=22A}R[}RmٗIިˤD=2efXfW0wK<`ߤz/K9ڱL-zME)b*] 'C$ų%oNtsD@YzqVEc!&3KxKHCPMy᳀ElQw)zOF8R0gke.x'lL{@m>h:_=8Z&Zwxz5q5u xӟh]g7Qzw_Ow*m#8Umvo|a>r\`^A{ն8^lGܶ=B}TMe(j㴒z9^8;$Mh<ݱ9{a0^Ӆ(YvS;v-K)((wlqEOGQ,,Z&* mP Njk! 
R Rhyn'˅wCg1xY 롐w3~5PoL8a.@&uRxg_Үom{KjWv:S8$HBj CBZ6{4Yb,6Ǯ#]4:%7}ɥ %iMf?UIhl0`quXQZEsLm½ Ŋ)Ork?_͌UQ,T\HdUqq) Tc&I"|`%rŨo4jk^~$ZV:,{e GqcH+jXC%4K is9i8Ƅl6`h8&+!E( —TEJ].!hʣ11Lgus2w s4|:_NT4dH UT.cbE/d4A(t%E6A0C0 NI4c70N|?v8@xz |<~~Tx6 :5eյ~*9k}_MǺd XtQGkm9)\..Ap(hkcع=!eR@cQ 0!DiDyqn %6˿bݓC!1񥇇#Z*I``qQ$0Ti% =J1AJUEWof+} [ʞBt%u!erbzgj:o:tCJ)9b[1RBl>gNbTryi4q:룁>m>B4NMUfSՇ=&#lyͽYC=74,E`hsVr3} ($SV<6Yq 2CΣ8F Ζ~pԏ/TP?ךRMJ](> I(f J)RKqc$IA|z0'_LM5lb-J5Zk9cjK 0A3 ރ*p8E̦dnp?&LWkk=)c NP.x#&r&Z/tFpW3Oc] A,fS#C1U7ST ISIZF)e>h$sh1s1y\)}MC~:bIYviuW0!!aqMBŝz+*I현xsÓ:Hy&6TꄚO&ۋ3 AA%=y^haD;C8"$(FB 16߰yDdH97R/|0Ͳޞۊq1&ps|;9}aipۚwva솲cN/g=KO=9?;49X@MX3Rؒ Xk.Dˌq:=tf@4SNLb 2bev[ 4Ã`A&)[;B@<GGHy^ΎN3 ԉ )9+c~W#t6@:"f.Pl$g `s>@[mMB2'ҌdYC@pfC8s`O1VަXJ`Gg+t_?qۅw 6|k!j{/lթuǔ[h>it &fi}*>'8YGihk .s%DbiVfC8a}[ 3L&m_'?o ]mOjjԬ/J44dmC 9bq"`?rw@a)˱8t Ʈ|J<%iv%m--H҄%?45&_&s5BSnd IpT!Q ) k)/J1oyؒl(9x]¹}ULR C&Ps83pi^}wsoSR|~n2KNl@X&9M^KM5ls$/4b>c e<} %}0 S/&*MU#O("Q|PRJ6(c|{ɱ| mHXEuѪe&odi 5]UFJEp BnAޖ OhR BXHB17KPJ/d7F4ʧ#OyGO+-dcqRO|Z&tbi{7#DŽ6k۸"}9Sg=7,a`汱y\)}MC~:bIY_L[[~]„r c&n}/eԵ3m*.Jd'x6oR ctyx~Wߙ٘ !Ɖj)F9KG4m6崳,o V[k̥ԖDVJ;[`$5O&foL0^yA^*GXn.~AIeXT\>A1bSa4 `\w rZ 9揰}`26KWZ6SPqDʑC睬PE zGz; {=D gUgӬKp4M[KGbp9Z j!/UvoW9'nHM[х((f X 62Z&R&ÐnHBƄj6Ԛ;˝F"d2 g1eW Y׬kK9P]ƴmc? 
?9=:,Yz`|2`j'I^Juٓ$Zp]>- DZ3ulRJ/&J)Xm1El &XEJiުZlbc' PF YGEb*(H%e^P|$B{f&U ~>t~t*!^?9;[{Ѷ}믃y]սk^1N8?%h9<9ǐA~g@G3:ФA40 $LH,I%ӟ>§F40muuGrMG9ӈR_o<"ek2$Hٔ~Qpa?gB٥y2m }!0:~ʛf`ؒ#_۳4cض>mcض>R>}zW|b:k"fN?́b g"fЙ2@8ef0Z g]d8à `8]kÜ`qMy so(}S?plύ3[{u:;9̀tj#tR'6'L ]Tٙ:ץe9d lR0ti(Ĉl#h\_?Ez+(I@XKv$g63 4n}l= 5H=pttvBOWa휅r.O?\C^VKvD~qd2K@KI*HCndk?-9ct >1&a}ĉ=1W;v ;% tw|u$K0ܢ5b&4TMX2&T&_0lV6-&M>G)^Bt!& Vje6tNo ba٬y}[~P2URt>)8| Y_|/mL%Av[26`sʼn;|?Ca)˱8t Ʈ|J<%A 2$o\~hkmF&s5BSn;!cmnzɤZ [Mm8Ҩ@imJvTncUDGh51B)SFyZc͠*G:Ee؂l^Z6h_l/נ+ ŠQ[_sؼ' ^xsĵe7'>MZٜ4)+m$GE.#xP؇`g9^4xRI^I*`SeYGZr.I1dfBr|FDWN<}( HuW_a%+pTI✻ !UݶM3{}Sz3[,7CwGizY3Fӎ9onݶ[A|:<3hO3,M ((aôW9XEv׀:㛿'UkJO+2GmFlH{V3?kWYu$!G߼pFi'ɽRډCY.'oP`ɸ^mnb0a*,eNWLi)SwJmZڶ}EE(pPk91 /g( 29D%-0SVYv R &N &dH$Dy:4#΅i1uW6K8i>?/x:zǓtC횡i|ff#%5Ool! 6\zcb*) W$YEr\F9>%ы`-;Nů Km`u4[mMr=)ZcKnQ'+})rKN=d2O:+_o@'=6Em+zo ~w#n_5W7rh@Ɯ>0|JPvav-5 QF:#\@e/q"5'C-aPzSell]azuh}{H6 ͺClYajɭͻ=4Oz~h| v7ZlouØ<g_=Ҵas>̳fWM ̨%Mo7K t- ˭7wΚ3uiنEgG5~/&~|ph% ȫRN.(boQ_Ȣݱ ~66=xMQ2*I\ld|z3-xe-zqGYsz;8`\ӭE8\Kr{FTw4x8=J$%CpU*?@"1$RbdxDFtTP$H߂BS(t"8Os\D[t󽽂˛E6}TzC,4lj凕)g䢨&%yȹ#Ņ2f8V9=Z(-/Rz! &C!D=>5$X1$E:(`Tid,FNlWbq*Ba, /! g3ts=&_FG져 )rH)#ꀷѳ#)c!k<Ek!60ceRy NBPCH L#x #v1r#Æq.(ڢG^-UIHeӂQ@B$^ȴ!|BxwՓNI2c3"!pjT,eAry QFI{4pbhf=ozI%KJZ%BQЕ/:4tg%K()\tyḷ.WOT> J4T+_Yu%hD; a 9 %CAhM3^mEbb tgd: 1YЇ=_on_yb+NO"QWpE֯Uub%ӮK}lO(0XM/7 qC 4b&(ř@Q,WE)AQ1xΒj=+!7yd[@;uU6_qɋu4P&-^gf5Jgg ̈́wcv-d,o#{0e֪gNYUicDNzTk9s=4 c=n|SZԬp9-tpRl es&'!jcTBjƒEy7PE%ict`s4LDôNKW6kŀiZq~wۚRqjT&kI^jS뱪.[Bvz{`Cuʯy.|.;b_LCga5$_-|zO9 yrn# /oo2J0M1FeL_~pŝe8+cCM3uW+i֋l;W%Y(ךVb㤫_3]Hr͟*RHxb*'E9y.s*wEH,Lyq&ЮX4HZ4Jޢy!B)%L.yqVVRo ꉲkWԜ<?~ds׬+2 mSFyxG;x"Sh]_ Ꟗhje)J-!) $c5Y5X Ÿ5 ȃ?uD>!Y\Գ5i%`o bvW)ElΠiWR RK)^0K MGkaQCTv@ΚsѢة>etK2N 8;@/&@LMfى2)؏?e^wH]0o% ,D; D锨ăCx #(|r+s]`{]v7vZ?Zsq08̱]~ko+\}q<2,i8y xa7槦&Ayߓ^ wVg-}~~5Ko.XeOh^Ǧ6ݚQ4]=uVONi\0@ЕuVQDwV^׺P&hy&޺MإBiJڡk*|D9V F͞}Iy~k3ֱrjXSE9-8=eW_ )ߓ::)NjoUtCVyDo)234) hhqP2dTG42@Efr! (wTD R/$@1͎Kp/g8V2o(`RvL[! d*Γ)*֒dbIC(2rpf a:*eGbXB| ި. 
/,gi(g\ *FUn^@7TSϬOĀB(Ǎ Y<Iyx*"˥R/%Es_KnӯN #Ƴ5zSsJs#:+OLN[/<TNQo'vZx]f '@` ldp<%R%NHLQpwԙԤ=pV(' Pv |\nMG4DcQ6S >!G4*NCy.q WoODir=\Y[Mbs02CE3gSOyD k {׋G9X+c޷E:qȩ Kwz;z{an.#%rjE";z_ݾ q":" 3!\ݙH-LFE>D턈 +ﺶrqBlZp1 ̍J3Hwg+FE٘i;I w6BR:|R]5 ʞQHߕpeLEI04itYJPwMh{9=;PUӦdZvvA޴4ۏ2I:Q=5 [#%}ҊgW[_m'/E)+53adUQ ̯_ !qdLTэ5Vr R:>=_ 1[ƓC.B9*7MX D(9x3zS5_$l#HFx+`y"SEM&D*rURp_gUWwS#ar2kƣS|Pd5HɃ2sXTeC26I@u򆷠U`PϖlW/4 }{džFYƩKfT:OtMt+tĽs+61x>SF0<0ܐp)g:GG9^?n쪙 }F;?;]NXþ$cls= \!y]J)g&wFE&$Z_lQDA pG/@x4rhQra0g۵-F>|G ]7mP.rIO6 V&ΙwJ^P❨x,72h!ȒdVЋ%8:{YRr 0rDfNk>ޛ_bK{2ILY̴(A`XvBCRP9Qz`us*Nsm.Ip ͞|A`N# Z:X]9,[qW Ǔxw%#޻Xz@u̚WI`oYDrm;kF-uw9;k=sFx~Ӑ޾qMI 1t*gn7"2r&uP}q(ˌ^.Ioe;AI-wL=iȬU]Ś\JZe =rRyo9@"\]&S:6^^l5rCu&_ڃص wdtvTxL_)Lo,zaz|7k;\6>ȽM^Ih$()D xhtC|i;Gvf;!lhdݷOf~|xrk#C %oÚyB]y@x ٨gs̍UسbC@" akF G_#8mos#}]g3.=:7l<@=\W@[͘j %ٔ"I JG}&Z^G}(vl֪$R֞sD56^?. u NLE=_p{vg IQ}AiX`o +3!R@ZKAdY%CDMX:g}U5D. &R) h&G# !e V9xלڪV>=9dZo9DTИĠgC 'nnNVڸQq !}OA&C&&FT$Ap$E/8,S}ä9a/۬X?ue{k"gL 5)fmD.*뤹 C 'WyЄ F~aE{I%vvzGuе/ڞJ)V=k|?ZK\Jlpu:ǐgMb&4"6Z{n,; Ss^4Tׅa3__}QX'/-*F&F)R@ef}uv^gzDh~{&F֯xدӬ=ZjͬR\o)ⷤ?Iɸ)[id~.u`]5wN"!G&'~ ڨF:ZE*&<]]$l%͇Ma8R75. 
w^4,^H|E' kjT ) ҺYVʛluT,p-",7Gm̢2/2zesb:q_R4jlPi6B 娌Apܭ@jO~,Yi)E#I#j {Խ{;|1VL'8W\Pn"XK݅\/q7QkzB}wظ[rԕquUȕRUVaU=+F uUFs15L^*JuuU;TWJp&/ ]^*uUΣ+R{TWZZmW~B<8 D0W`$=Oϣ6mxILˮp݇F+,Dhyac&XBfG}J#&~0>OݞI ~(a;7iڽUiٷ((f]FsiQm2oI|o/D \!8n}=0FĕVnwˤݕ J&q fmP,&a>rk̺25Ӣdp6W}|[ο_uu~|$<%W{֊%C9lpfϠl{\ovJ=ŗ,܌bV  Ĥ|owΉ-U||5XfqaqQԓ4ܕCYbCPVrgF??YnQ_;2?M~G|D/ϣm|>[cv|G .njevY 8,8'WNJc%mtb'>h!Ew-=nH>6A28@0i9i[.Heo0(IVJYUJeS3_0`0(Y>(⪛iNpIJF dZuj`YTnۉ;buuHZ?h%рd!Y • Tq7J_.BjTqֈ{hӼӻ[lk:qnؼ^{{a;ϦK֛!jے.+fMp9oVBUOWYO~> ypnWk]On^d=nADlMUrbL%q'D԰bC.j`p$ȐV'vyj^UPMڠkodQƼ GYq uϚp: 2V vPg,Tl°GM[Wr'})O\-jq`\r+ՕyN)Ǯg>{ճ>Bꅷ}Ă2p'hrMOYc7_ pN Cznh V,aiVrkWvPrc +vVI\ofŽH _q/R\Ese9QA?%Ih,JwoV!uvexS $R6&0*UI*Il1QR;Q8Xbkx\B Vum~f `?mTQ~>oaϮ^_NI?.i\\Mۣ,hl̟ Oצ/z ?=8|^Kˮ_.mBC(eBԧ\vq*jClQd$c崛2_'KzݔzR4f+zrG Jgu֎fٕTă})װh~<%PnXQ Eڤg[ۡ!;LAa[(6p>NӼYuz\AgRIM֖.۠CJ8t3N~̧ LO\vI8Ւ!a:\Im[Cd&/κc r+3'͌OI&4KXƊ;7q2?uޢ{ݗ6])1x+jO>CY'QЦ *z$M3-2Gf^*tҹbU0:'>fK Co GJVW2U1 e '-P+9dDp"$HL2GotTIO%(Ŕs5@ 9t- ڄvyƢb:$A8fG7 ”f|l̈́aKB"7[u@Òl\,16 )LΑar2kƣS|Pd5HAH9,S˕cX-͇W%*iĈܔѕ_Y?ug"{Qnj#7".oDT{Fc)xZSywLxIi߬YƔq}j7f [GG#-}rƑZH!L P$6 (b΋ek`Gf]vݚOr T ) ҺYVʗ6PA=Џ00DK}E3HvttEӵѰ:RbA+ZT(;NvFO'[pʝ|tG ;O BſFyÄwd;)l'l'l'*%]A-}1Y HW󨳗K4 @`.\}*JC$T%G2+ g.$Br(ȈFs:9.#}@ |m%ď>"]C#[+}]T}}f~u,C:0i-P 'Y!sfUru% U'399n*hT90W;78|z8.g77 Od5tp# ~hUxɗ.24qz"OK2E&a'ıٯJq̧ϢFD4}7S)}#"3!gr{PcjC 4eګ%)Y&)t Nlc"0y#Vav*5)`pܦ=pcp[,Fk29rbJg ]Vgji_C aؑ~7iε; ^5l'b:*T|ޣ͔~Cs2ѿ4{,ܿC'Z琮>-Q[j!4nN,^~B.{Z7Ժ}ҺY>= D6"Ӊ?>mԲJ[w{lxWdG[-C %uxg_7i6 51uh[mZ(BOw)sȨĨ0S0r_R1*~Nd='f^ 2(W|-k_-p^c"#Apv_M]C'w`tK6a`GgBG 0S&,$i#0csM6$Qbr"nJ"^Eo P9NWMMm7Miú j C[:EOvCFNP?K$vx^̛Q*9PB 5RiN+ts !AP1TMY4$r4 `̒ 2=)`l.EIQss.mӣFQ/Ela58Be[--YzZ„ۗ f//*/4n:>,^ŎF6KY.xf 6ycdQŖŖ(30`{!E1RkFh2=2YM`u)`Ů&igwq\jWӎslhGVFι04&.1(8_~It "1Yusp1d N< ٚIM#(|*@:W{'I5qNÖԯP)q(~kueG8Zc2D*lI"j^q> #s>#()%90,D`ZD1mqR p \',tȄ!@Ӑ"yV9m 묦%ES..vq&|qKWEXX4cY^c;ڊwow[C+!|:gǐgMb&4"6Z{n,φන]Ux(»_tSW{^Zp|5Żs|wG>ŻEk Ǜj5}l/q$ۺr+?ZNQZ5H̸uM*#g$,j*Wǩ޳ܺ'N]Wik.BNF\k5r94O:6(ʋZ+oIDY`1!bJFid2:»u9(PJsGyaq%&Ll>=|̎Zq>1=uݦ۪A1c 03~&p g:ڄU:ڤ;T~` ZX&:DMI*8Q93P 
dƛBNikLR&z˘ˆۥCȁ~BJQől耎Y|i+lL$"$44;~a筯̾C׹/őGCY;_LT$ɑ. BׂV (.)Nȝ( [4Ր{$<#d[X?\߶lEtfcmxTRf[7NXv)e9qR0AOX&I>0l'j!jY?^A#ZR<K} -G@ Y\)Q H!kgQf^*IQXeE!P߆_v+R|[ʻe_E 7(MԞ ! c, hIR=W+,Оj]5?D䇜%˛atN9gSHH#+>t ̈́RH#…j{(Cd(3. O o46EΑVڴMPf'/k^ƈΓDN ܠAAe.h@+r721h2|BGen e񡩝M2H+,$m@xH(Ѣ DEp~+{1Y:w4A<A03FzTs p\oYKajC<2z2䖼CPO$!=a;8m; O|# t~ekyq~ٽr͐r2MKa0I~l`_FZXZD Ⱥp™"11D@Vc 3zJO$TXsps9 SB #%( BPRj45- "tJT@ʩP)ъ:OuHcPv{UV x28 EWq!3/67\]ekn*5k˺Wx #$fj(JnаM/]ϦwHG8ErAI[D |Z%zBAAy@@CY>> )'*9J 29ՒK= k"扁p٨I@iG#u oN.c|2<}JlB$ `M3g 6W3ۮW3Rz Uiv6U*T}Pqp/b_2Kގ{\mE"ql C>agXڏI9r5٪j 1Ί/0db 5Epb6'G%!A[( Vi+\*F|ݠ4KbzGqÇNa-ԿffL"`, `7NzRol]dP^#+P%Z3+"7>Jrͺ V3M&@W<{4J{4oѣAOH!u"TtF]eryg0ڗß2O|*8XW=LuT,S `B" a͈"^ 7DHaEk ,$$W1&:JŔ kVrx+LnV0Ϻ׷#d? s_.>._c;n]c._5? %,^>yIO%f`g]Ykֵ?ڟuϺg]ƖH́ݸX׊߫?6> [KFb|aQyR:f):`\Z'A,%GEqh˼0TRVF{ <4߹dSAJq6ey..tpҐq}U4/8K J\YRa,=z.j[8 yTW::=[tNI3Y˵pAb"&?ygLDCizScPSggYx!BՔd1*\LVX2e"hNɐSY$^/DqŜi80JRQk"tDXh}# ?-d7߯)u^.3%C3 IX]pK_Quh=|z'+pCPQdy'xMsrDD51^QDžFe !88JQ#e pgt:E!2aE  а4FaY GRRrIADN qj4!yJCś`|CBTuuX-XT'_cN3:%b iP iTYgAjϖ1\/6ɘZ*EWR: ^9mE2< KRA)a9 uSā^Z"cs$ȏӲvW\sr6^h ZC[ysaO0\38Qr~=mmHOulsOG(GvaevFӎX7o #yOZ|),3@YX)X\5z09" g+H¿DL:'݅5M̤@f0R]ݢS?S{ Wg&@5W*>n7_u>B[z?@ɸ^yshM 0S&Lt:Eq={Zج嵰ۗlZlJ;-*19O&\QdrbY*۹&9A`O'v*ɷQ|LjIJ^qXG1 epƭrT3%{&PZrߨfZm $F@)C!ǂAV )API5#爲g{Qʫ8㹺օՅ{3)>yd8/"}p(h4<gzɑ_깳#x5؇fFcA]*Z]^`ґ);J;"q| Fs &CR)dK&G# pN爙Ec+kja:I[e_v58UkZ[ Z{@8>Z!ӊ9&ȥJxx&/? 
]D fd*!2o%5 QFMt]!eX$]uMLbFjD]Y#A#[PK}Fbd5/uH9>#DJI% ZpU51m|cT p \L Y$!Өhc$OZ ϳFFq UҋuRbոTh*E3A/)i\9-]|s2YFEkc)Šϡ{9Si *bcilيկ2lo|K?~5ETYh9h@xgmuY* I?8We_}7eWMb!u&*&FZ@z|wzig+sz}5˺q|xݖY1ٗ{з+͢^fK r3M/Rǔ`fҮp%cP$JR$%S"{>rS >shy˷saéenlhшgSTƮCMnhӊl=qSˬPCjnhE/p<[[GW.4w']33c\YN:l&1f,I#% 8wx2ԅJ[ƺQ TZ N>4P;J @.]m)"KhE4J[&'ywq2g2XlVJ,BV`u{"l閘3yܟ,JqjI)<m ;H)3Wg2粤}oY* UiN>L"~X/t~?79_k_i:{_$-nf,r\_Q+y]>J 7pAbR$ʷxlh69C?8Tp5wx[ҴC6^H:[ $J:!aH]ۙ ¹~u6eɼ_+X%153su&(t`!?}g?!']0P:cڔ0Rw$,J뢖"^jGB́hiBB="% J<o&)B9Cv&$99Ac-r:V[h>E@w$vz zd`*glb#SL_tyNlӠ_Lg~YH%sch~=U["ƽ .3~x5a}uS/bgFddHpq΁k^,* 75Y_xW\2DU~7idG͠D 98'Mi76E6@~+1|ب *%KD,-mS9πzr))!Yx A^Z'rX{5W4"@d̡ "Db4c 04!8UU2$}jyex{EXo׌HOǸ.)GpX+\¾Vy ]\*]R! ;4۱y^]Vɍ2)Ɏn@HoG FB&&֖L$'$$Ι+%БRX#szrY}Hpt_!NoƩpTF_p2Zиb(a~~賏-W|778~6VO+Էy?эũ *jMz.W%L0Rc~~bPpKVYquۍĀ݌[XvEt7Om! T%5`35272=|R# |rIu\eIF[-%(%0heBUs 78򐸊.rNgTɻJ!2=x ̕j= ɠk< !n>R邲DqCe_:_P%G<{`Xs pr7TG{0Z8ظ~1K\4_5e`\f24nO)%}NQ``CF|G*>[Z^Z 8B(2y )f6!fS^{.>zc2qvBV3.?/ ?&gg 'JFy{ MA4,ӿAG hAc>yƓ #BJtJsΐ5E,B0Wmb22E\.aYBuG~1*M' C9)cʆAqUl_)? #]j\F,C5xڭCvÇtF:VW$wƓtب~\&kP.rwja…ߝi $*Mr1ce(t~Ws%m zl-Zn A&A€ʀYr,q\8-VbDn>?.;-jt6gӯw}J}FYx~yy v.m =SLY\-tj\}5ZHC1:Lka% pR3RD J3ne6,*$a8yQTֻl=7F$T9誑s\ww dzt,{}/9ǧh}~vP[ LB.3N N 04:rǩntb_0WF$k&ИgRhDb&L!5ġ8TOP*MDD"lqDd<Li;no82,w))J:"\}&Sz.*{Z,fwm=&ץlm5lt~kg}xÄ|B N^#Sp{u!D$m^P?~#[e֏v#Ac.^f9I7c7h:4͊[O{hy~WJFJ?3wyB*#GLǞ|{Ti۰CV=-iyHo߆HKDڡ mynCo,tg8uiqR?_nF!"#5F3@M֘`TT8xge=j;?#x#U>g2Yz'Hl$?A)%+$]6:=pDnG x4S6)g!Yʥ^H3N0$ETJYSB/"uP)l)FN惗6@&GP^X^VVnA؟ ` ÉoLBU%.h+]N ^'&p+D^ZeZ"ŪQ[S-<TRD<&3xIss>s\3F)ӎg .AU>؛Yܔ@n7;wv޽_/_5v22L)rȹT<ց1  0dLtS0km+GEs" 4av,ylӋ[űVl9}DIux,Vam}HtElæLCBĨniKϮJ'Q5TJ;#v7qGx6ׂ4HEm4`3% $|rL,0HPPEIH 0͂d]H P"daMP)N&%R q !JY% rt~<\ge8i'cAnPDN8!-L*Tt^[Z6(tWˆ p*e k-ˠu(q!4u`C]DE12[,8S8%5J鄲-K~DȣmҘ:iCquE7ℋw7mQ },P5--'ZJOX]3 ,p)pq<;iC;!? :&k6%ޕ7e+VԞAJ[WSx{M@1Eyи4u1&%>?(A'P)&K:%碅f UB! 
7]U9zr[+4atwk'qO|wNV/Nrߔy{9XyΟ//eQК%kB9gMqQ+4URH9 HZM$ty/  UBr>(IL8*%ehh98.sΥ݀VcSJUrka=$i$@Bji-!$s@sCgFy#\ O8V~}ܞ6 ;dK>o_MXpјl6F9V UX{ Uj 0HHOu5Dg=O9[셞 y^T]Zk0AJ6fQ*rThP|^sP y O!6w=δ0pqVOGr E$*N`2_ dRRva__@F2֛b?zFHQ$/^BvK=@f!/lؽp!fTT)V3<$ VX|ԊN1V `-w[{-yLRG<]%uXW~Y.'g8$nD{ݹV |b6D=tPY2'>LÌw-5oVM1FNKn?u/FG=//73udE; '-JJеXcYtCɾ[ a &)M< O\l_Ӿ}N{6(^h_FV% 3Ȳ294 Ld9k62h*^3Adz]@i`e98Zt㡇H 兣Ǫ#.+Pj篾>3}܊Ѧq+L(.=Zj0pJq@SfrNWCaѪUMs !Q1IV:?)N2qTM+Zc(ڠxKh*J0:/N3!zʼn 8Ǩv޺wޠSUMN-(Lh;YY"ֻ:=NثtbQ!z4=،NejcL1pЭlMQBֈ1RֶܼUѡ ]>d㊌{c7 yut5.^,cMta2\%A~ܓ"-+d0i^{.6y%:ۛva29.؊zx:i3xTS~H74?D }+!R@k :8lɪBHLPhNLk{^X) MǴ_>ٻ{ˍO ti#4M $ hA̡(Z$R&Eb)؜bk.tzl=omg.'؜H;25c00`e<;"JC(P,QR(Qwe$`gQ} ;m,pHٙZ#XhZM `BkZSpR4x5wQy\A~$rEFTi7`RY\:8x`p 5'P !=!/PyNo9x [iZO$nn*ALΤM{.5"]1e26+#+ !&勉 &QJ bKr)8 8$)px! Y8[76]^_voolLD]m*M_ʁ)"&?2T 2 ` V %⎝7(h%w,}s?xnVi `.pwv2)4gXf-Ah}e\`rˊ!8|w}۞xߣ_??>P&vاqV7>qʾ V%~ 2\?y{l7^J!\q'&_KgQ1QtmʥlEfoy.mo~+ $ȥ*->?F`3Q ڴG+3GI->[() 衜Y$ʈd[ĉ%/\\qџoEG&^o~P"-ry.Q|B#`U?6Coc׮G^$_M[/} v?k0r_?~Y_Z?+q}uLyb\cOٺzGwVf2zU?Dg.W~\] l.fooGu=p\Qy}<_|~B]RlT`11 C a5r '%wR1+Z#*ZhriFUUUh圱h W dfe b k>sb @@'m cƀcRlIΛ93 _@k>na^[;^;˭:\UkwGTgxuAݛ7 :4Nbw'4 =t2ϗk8NV'Or& PWQCf(k8QETLL򉘤9Tc.bD #zڍg:)~{11PU%:+X-"%D,L:%$bv˯&߆%ϥmjggmNggT~;ocЪt>+n(K(_6;VKQX `B=O u2L2KѬ)FpQQljf*$RD x!k*x5 vYY&eSQ\Cjc][*T,YfWņ*ep!zP9|]6k1PW˜br7[4ZW+K(rxS'-JjKK^ `[W{w/MuW :qM`GS0N%R0IqMJrSWX0^+pp%£&>jҢ;\5)-Op ኼD`hઉXJ5O &5wmm[W?.n039g` @A]m"A*)Ekb[~ERU^.U1΅ЕB/t bNvz6t1 ~R3]F fDWϥ׆UG[bT^]9ܜ N CWkl h{rU0]ۍн=Z־3W7,biĚ. e.錶yx} &[:٢ q#N,v#SU$X?g*ӘoWfUaobockK Osv="3uuCd't =#PQlQ=0׳edg5?m}l [煇̔L$;gs.G'W7?ݞB|9joUZK4$m dHU&meY2jJoH_?t}ićD {*}{V|ǶAOp)Znp"fK[rMNVym)dd'i$0jg$TZʘ6eq1rѪi|HZpSNFi~e<)ѺZcPO^Z fI1AUp%&u+2$뒲h1^鱁O.{Xj͘\5U,NPR: JU$BK#c k1M)+Иd06:eZIPt$)SՇSºlh`@^mQ):$%DR-hônh&M :J0U`Sާ\1XUFҚw-"9s& L EՠAu1Az"iųa) |Req-[@!!X2hOBU |\&9 AU55ksHH%9ʂM/ dJyFk(|vl R"vUS5}GZGWHcVECPZ備&[b=I+y1뜵'Q\2c I&l.-6JR`%gE2 V@`&Y)dSH!ﴋhUoTddC`JR/ޅq*1ސ ," VTTPtB[Z CshR+oݤaD)R@!VMėy Ecȓcm&t9B\II٠+. 
-a5TCZf1őjJʞe]}`TN# (G!)MٰՆ*ZeD@!ҐK g#@PlQ<)0N-`&7SXI2 ̆eBGZO S`^Ou J+h#vl!ՠPwSA02P(SP|k|S[- kѿթJ_JAQh8(l,@@Hv'UP:o3V7_&M23u&njRFIC12DIJBL(yFt[> 4mKƪwFVy m|ka%U1^-*4‘ PӦ dW%ׁ˱7K*d`1uLA-ENp`CEE ʃZR I"92Xe(&Պ.%9G0Ara^F=ԭ}x3$nCf6⥻s*Yȩ ՏDE}ugD%ۊjLj&@D8X {WsA^LZ@mD%\uj З 7LEpjI7P@R@")(ʠvPJN%Lň Q y.!z b@ l+ jw ,u[L 6"3duCJ ENi6ZRQGբFh bL ;0-'Ej7,JȀ5ɡM!dsC<XػܠvzE,c5X:#SA )aI吐 HOOW,A7mŰll+O3 XWT!& urUpՎr Ryˈiè=P(ˢ#TɠHԜ0YYp`\tF$ a^'МڙZg$h\;5֤Y T)JKPKU=d$1rEI՜ZOk#kDo)T|7SAP  IcT9ڠb  &]`?[|[b ߬՜ct7OwQtCb;9!vsCb;9!vsCb;9!vsCb;9!vsCb;9!vsCb;9Ρu0sX|CPc:9dwo*A :lauبF6Q:lauبF6Q:lauبF6Q:lauبF6Q:lauبF6Q:lauبF1w>?'A9~ϾowS,׏3<E-W~ƋRH$'gDW쵞 ]udBW :JFR^agDW~3s6=])WIW[v{X.ͦhw(wHyl]R*#BW@wJU9]`)CWhw(w++\ 8_yb5x\ɧU zq\@ݮpD~f{^0}yrXn~G_4l&L& oĬF &)ܗ>)~ܙr7B0p~~woLԭC" ɤ04 \oPEp|<_۫XoJPY!w=r>#d8WWaڟvAk!6>8 t S-ȼx.1{ Պ#sGش]o(ȜH(Cp콊9kCSEcj>MmNm6Mmk Pj÷^cEㅴzN׋;`?׉\/h_/(I2]B*gEW+5BW-}j^~Dg/u|BQ(3+tカ}>?yqrrzo?o6uGC,먈hs/d9?~1o޽qRfZ{.!4 M_z<֐^()7mVJT媛ρ|r>jzX{٘&篛Gٲ~eh^eCYYBLxpډܱ@,pNo>@N2{MO2^Ĭ车%Q)}ag{ԥ. J1y@ىHjj頳:;iɼ3h2N|>ra>rlb?LJ 5rLE!&!iK^ƒ)6d,e򥕨 9xDNp"8&izw{ب?Mrc7!.ֽMo0gOD=2kœ1vOӡ2P#CtTr2%s1D:Df47 UHd1oͬe. _e NMݍu_ы F"|b 3'Db2+/<( Sb,}(e2 Ɍ{K0Vs(![wX||#ZRK%&Vy>Df׋yPt{F2=Þݒn{ҎtS4~B.SsAև]mk'?:WA"P TBE(e凴w>!˻aJr1"H!]Eo+E @7Ǣ4 SB(Sle,8e,0|0|۠<ƙG;)'1KXu td>2_E[OdSe,.VqkɁgcMPD0ҹWƝ=t3#Cp1.sSƳQi8mhQsRܱ_Kג_J5G<2xuV9i38Fņ(в۸*.(IgnbvN'33)b3ZO,Dr=z.2>&g?_- n+5qW$.ZS='X,TTLf]'}<_G äz Co??+[YWk׏-!ݒW܌T ^#5|ĚDH%ԟL߯ X nVr`g**"T VI''OZ꼳|<.0ꌺ|K1>$:3KQxe1$l qDXT"$Y(}^5Qoy>r\ #:*}``PqZЦ-{Z~\.[xNO]ӛ۫v釴<.ĕ˻1R/i&TW :Wd%@Sb aq۽"_m0m~=)w{ kN62]PW.83b\JWɇ/R;%fZ~p23yKǠ:Z0|9*O"$\G]TW=e bjNx~Zs!@,kmeMV(`{ ,2œc1 ~Z$i8] s|O%Wt C,k0fÁ5b)эur=v>l'UQ/m-? 
_fa#xtp3Tk[-1~7yKE3U2\AT֠9$^Xy1wM!QFPbv'.Bv!B,6+⹔-ɨ#Hm5BXv,LȾ$eh:iWY)0+}@Do9BvEhϴQ$IeDc.ygFd˽Ԁ1Go@18|8ES:Zb>El&vI:rJ^rs-9~drFIiȂ8DS:vmkbfK™{D y`ʗiX.1&2Z'0 g*d*M`]u-mP{Ωe31qBx;]۝{5Ly35Ư{җ'9j]zDHF",3 9&cQ&SR p,'k:pPhyNof'.O0lʅEdi"X#Yɲ1Ay` lV|z>$@lqlJ'asv[FA+vf.۲#ZeiC -z'o "dGa|LMelMu7o;<-3!b> r5hǎ Z/x}wZ6=n95o) 훶|~^΀s3ZcKjdF&0oQ`}'N`pM:|\|{;Kڭxr;'N @-(M$_Y1EesPABF: NY\ӼfKν@𜾳d؟~zfgTJXLX :nYvH#p.Jg܃Y c^?W4=Rm-j*E{XwcMn{eQ%m[*Z׶ ^V,=j7 Rw+j. }e?i-:)鍲.JJ+TcPvWo+Y#!-uk{gYWp3fZ=9_X Uef $l qgL4d"4FX(),F/|C/f1˒_>F'~s?K76WZ˞jÿdN>+ jy.q$Kl]0I;+BZ$8Q-׶_>{GwVK/-\?{wtBsBnz=otH~lHo|V'M2ea33FT&2ZP 3:U I!KAGDAz`*uљ87ΕbR!$rZ!t0rج=[~nYoǣ4o㽼vKbS63 ?m,k? n@fUSZi"2;gښq_I I8U[l:So&N#9f/o-2HvW. J`T#ڍn$v#;Qby-g 91 YLKLR: r }I8c)VÊ#k_J!bhSUΞuA6:uv '[`K u7ӯ]0meElm8N88gC~Sm.Эh4~77wn 7=5bq>==t^n`wt/%ƽn9wWMw/c_htZ?^ÜIcn#u^O #S.Lv`(C8 IE`ȨĨ5F}: Y3j+> B1Uc͜A@#qJuoFOhh?ב Aq(bx.jRHIs 2u"T2\)j@AWC-]N,TleVWt*HȀf.ΆM݆ZN灉5eSHMOM'^U_<ݮOH Ct*DX1@(Lb eK $4o4!`L+h#S'\}499Tod&ndtnq(bg, vS7dQZs}xu`՗#v&H;PRfH>`!8]*04'lWAlsl0:d|*N2\U"ٍ2ߣ'ƭvڶ3jGv'QP&E'kVEa&kE7ZG!E!g(KB5(NNG(b6$PMx8'$ Oǡ:#qDmh+TL^[Nj:`&ĢҵؾX %H,0rEо:Ґ Mm8\qpJ¤ $qY0ΈM݈]: .VEMKEꌋ4∋S\{k\QmXx>d. ]|bE%v,rv}}/* q[68 )3)*ٳ)*⢁s)ڤ=v٧mRsœJ9֝ \5iɟ:\^!\YJFtgWM\w6UO>c{^!\9p D`Pp6p5\I{pդt#\F"ն_`rѧڷ7G* Q! VѡE@Ԛx2_)O2kM]6Qw՚>Y^VY9b>En攙qzfS)D;݊qrg,*AđME7Ig2ġbZ t=;z/4Be3 c8:;Gz6V{~L29jYGg./[8 &5.ߤ=v囔u׸.-=_^>O#v fZV,& N+CxGbv6+Aw/$4arF*Uex3Kh8LJHWKN`j@Xk`G]!9at`uϗsw;=J#p7?>j|=$9b d5VGo9d] 7QaĤ{@C,|!(rzhދ0pz5 ׇŚS$F0y͐[VsѸA>*ԧ:浅2c0r^XTQAEc BJ ,fi] 63h.WԷ>" ]!@G:ej.YYd8byن]S8{ |WPVV$I4˗Ttzy{Zc?bK]v}D]J j\vmi'N SsʙYD:!Ԍou%HHR1*|k}d\EA3y]WwnY[dӱ mbEeSe9˶X霍h"hO!YlkkҺ#+S84T. 
+䡮"WVTT1iA>J4 rIŒ*6EPm; 贤<خ1̾ mlr A\2'r.:L9؀1xQ, l:N< شth[{KaZe1h\[ZG-Ex[neoz,ԁ52qhuhEBvA1eUwa~5F{\vc-d2=w原)l6RKMc0x@*1D3*H?z@gXO׭$ȀؐU6ྡྷvCɨ̜א#u T{#Xƈ|utYʵݙ$F;j"g?(zat!cHwݣ9W'u.lԦ6Clw`ey 3'lg:=-M#$S1iyA (ooK$DŽ4;GY5-!; [݈KWΈ%0Y u44\Ov5 7/ӋK!}Nҵھ\7B~PoSk,:VNk 1T mk`ӫ)׫jYm=XG+Fⷞ߼P/zu7eN]isLVjyD؍n$v'J"m!ŊW՜ń9T$=-P$gaő/%Gs 1zX*gOUC:T xE:ٓ-ˊR׽Z~ފ>fwt_;!6J:/D`HqxI-%8ξܣ_57 CeF6IߧޏT3:`vGJw`eC;;P$-#>>$ƨQ>Hڴl> 1Uc͜A@#qJuoFOh牜?q../n~"Aq(bx.jRHIs #?θJV*jAE(vbէb+%S(FB,0vH8u6n6v:LL/ =>7F ;~=y۬E6J =x=_m|zV}v>"&6&lYdBT=u1 mU&HƲ%v"cjQDA&)|j>J`QNZ[ U@72vgIkrQ"E18Qr!\mVIHu;ڹ8pNWi0 "vӏCuFD7"∈4Vv.ev%*c6!%xDZ((A*e"g#n\,ՑLhjÙ((FS&mPh'byvFnF/|Qpq ,:nZr(.Rg\G\ʍ[rh덏Erȿ` sqd(-B,x \<K;C|:F_6B7'NDm}R_>^M>I`g_+1 O4q9Ӥut\iok?IJVwFz.S OyK ]܆k~iU~raO.4/U,aXro]߲øm _V]7HN֧}_4_>ƛE.>7 |^|7 a7U0}PoD|4K؜Yz?pXϗ.# GM4P<\YWWz(ٜܡd+q : }Nc?d%`^ 0U>} uQ4^'1BMe]ko#7+?.|bE7/{l"[5GH2VUlN*` DeQ71IEK*@aHH9(MI\ Rd004&0&~#@9]L͌|ܣk5GTN= >1c^Vx:DظdX]3ᶈKHHջw7\lGZH*-o|bG_VY;^^EH_?)%#nX&RӅܗ= 3qi nБ"D[OZ+owrHAfD!%!%AfM=o,͐HF$ꁣh[ckȝ7SUM)Cg+;i5s9rkkr tZ./b7 FDHiJMtk5Dڦ|]YH)ft (ek\䢠.h,j]y4Bz4k2֪VcV\ F+-U}:!HUBB r.Y=mY%/D4=׆CPIa&ZPZ"]3X`}~Hc=i~9J7@rΈ1J'>"i#Zh"^ڙb̞ـtvT} 2,J0$!qH2II(kH>GKiؔ8{V\i"3O.wL˝'k2:1MNQf%'|Dk1Rٌ|J p92k.Tat=#eK'u˦,u hr9Aa lThG"H!iЏgBC^Pp/8-,C{ H3=׼X#UojΥ+}sp).YJ[/Y/Я2M{CVҔ~r8 Q8 oP(\*zEzvELcƱ :땩=4vx[q%"NݜgH9Yk1eo~x;1m9쐞_#g(A! SFk$-ɸ;ɢ1Nzh׊\42ǝ\j4.}4`w=Luzv> Jr2sX[7T3oL/GݶĻ~17?'}}6vyޭ#-Sv {`uQhwK6zs0~Kbϫ;7S{eMKz-µekƵLLH-TLIJ4ͧ_W\544=WG[D׮J*Y|Ek=Aihyiٺ*0Т*0Pi)52_z-W(H6` F_ťjs>X<'?כ7XMbo3g l~`O?O>u3Z/Zԫ`s:XҎ:ckŞS/3@)![ A/ WI_MJWc+n\ug+!uHZ)LH 7WB)cCpE4"]"-Ii 9V ,:WE\-WEZm+ hn\;`WU֊hzzp:WE`vU_r*>j8 {zp53H\qWEZP{7 WVZ̻dfn:-$=EY-85Y㔁Ɵi`q.Ɠ|f֥5TijOHzq.Ҽ ۃ|cHo,KiTuT,H>o3"+6zuqQÇۜ4b#~ FgVz fVI)}ʺ+u9`NYf\m ZY6Fn*6Be~}ZL7dPa lEwntWGϥn?"co~X&7bsӔG)'uqV_]L3իQ?,?uy:4ӷ{v~v. ,|Q]L'd+_{ç4ìq$-s}ܾbn_1W+s}ܾbܒ[F疛M;W T I%!VI *h,8&yS2.ʖ[^#k#n&]1fxrEV0ǼPˆd2ΥR TDћFtm$/x`]XiC,ǔ^gH9n՘80i&a-jul(Xdh6eb[}({FwꊷҡDeBGm6ydgZdýTs9Fu p.%. 
H6wd:=y;I$5|4hYu1qוהh@*  e訒"N {uAsydh MJbt'ՊEtH)>p̎~$/< c- f| L:lIHUyUvIy^ *zk)L09h5)>leAhqXiMXtl&Mֵ7$ub[%J@0#xnVv}R8dcC<7Ʃ+KfT:$'yLq.9fX޾Cx~f#vf9y`#ZfgAQ32 LG<7;zii8v+Q_K..M<gkbnO?.F0z/ҧht4bnlK.X.\mdwMbn~OnǛӿAa[Mo~%ms+& '\R8xҭso7OvNڮe=)4<7[a?%" A`F"A Y$wek%yf =ŞFy$4m`wpUȪj= )64gu 4u)_ e d$#1K x~/0=[|4]ka*q6dnmPkKfMΩtd<n֌ޘo}{ltr1R;' EBgPu?{Fs2\K#d\?18L'9:#`."TւS^ZU d E0@NUJF:L:To]}̃FWr“ejjZn_zyj~ӴvߌV0d1Y+tK!뗡7[z=#&Ԓ=7'A#zV 1!1)ybe%D-Sy-ЇZH?.ѣޢ&1P^?>l0̳?y=7^xĸb] -]L9kPH rp:K%r"+x,Wxrz^f+QC,O ثܤ[ĵ};=;eܢ#7^Rwdž3yi:ZU`{:djaV}-*K0+^?6NWFFEZ}s=9n=N٬vcp /''>َ~F"GJ=Uv2O\^ PB(43 D% d0ҚO2O^"P`ainJ{.H`F8 Y+o#$ڊ} nvG\O!oȳzKR\Y" t:CG:֪QyՒf0"L(HANd-LD]Y"Zsd"D8-b^~Ft3vA>Ԧ_Rngmgb?ᯩm. \arl"%%x(CL AHctm+\DKi2d eo(K2Gʕ$RlTDh] 8-x#zvE+Fo >^n? ʠJZ**N%T #c؍:v^CaW911lIMK  Do\ )-Ds('\t"HR"uI2l:x-Fef ts_kK.^\(kw=]-p,쪰{k}Vftw}}3Q:{)ڴWn$_+߲J`۷Nuǭ篭yp^MtjѸG[vܲ[hnw =5"{@k-wCͶ{<;~#tO A=xQTs˦CϽؒ´|%]_Kst=5mt7κLr;WO2?28TYAkۿEēBuֳeBDa"ỳ頋ol;ߝ1ܜ=10e*Ġ}zzw޽ܻۜLeei;_û4kcοgEޮ{ ~;Zk!͗gK:Αo{yI4E!L$ъ.V:`N M8@5EFnG\*Z*L[SQ s@( R8Wi4XF,<(ދFVf )-7A./8b'~ۡHb T7cl\xޮ1**ljd!ZH|¾GQ3xp1b7g?b뫴b j7ӎifDxr2Cf IJ0E2d]ڑQA Y/AbONKm>(:#=dfM2X`)rd82FS1I;Ctfީ_nW`<DlOED툈#"x8KYپJZOI[d̈1Z JւoI!,ـX:!Rq&-l%ؓVn'2eQy6FfG?>d~Ցqq=fZT\Ƹ#.7o,W>fY5ՊOsIA'-* 2{rdv,qx4s+x*x螆I' w(~s!!k))tYx@#u;Y%zdzIcLW. 
^>"=Ơ} &')(2drJY1MBe#b$1s+aVEHiXI>./cexv>4~z?C0)8{97?+LA,DY*IgJVNdQ3t4JpDPckŀOA$mKbȨE1^`rJ\@Y0D4M`G8و?sNF}À2+kw{뱃[/V]Z dvt !yM0)B,ۛdW$:L`D#I \Of=[gD_.Fngav"_ eNH2a!U:hrElvt!yb8=}M5-5ؠ K%es9N*/Q)tt޻&9)^9#1RyMxRSSԑk N8ꁣTӭ6[yP4ObHՄRǂ4H B9؁[QmԪC&#ODD Y$ [bA~ H5>/m f:@٢zf?7M3_Km)t#|~^\z.Sgk}O?>OZpvY yR_36^2+4KV,fbcfgDzOUpTnlE_諲:%3 zP{_Rhz?c{}>ȫafY #E}9=wg?/׾ /jyl^?3\M?t6O*n c}8cο=cm<Jԛ'bwWo$횇-c-k\fop6gBsjp"ek9b gk&Aٔ?p'U 0Rs)B$vՂwlq}tV e=΄“7TH8eeY*%䁃[Sp,&۔ȴ8&$/Rk3 MiF}a1Ϸ{=[tJ>!&]!"^!\%Ԅ\c"+W99)㩘|MBGSffz:hH<])‚NB<DզZݟtt={b}g _tJ[%1vK:6I%AVJ9҅#YI/F~Q_jj4DWb#S$*%z'' %PT&R8+mH72Ra $>me{CR")DQCi( 55SSOc 4lILq&(L9;m9sJ0YNX&ws!it cTBf!Ŕ=I2t2M g>A 1+dQFl^b@NbT3MuJ+xڱgm t#o~ Ժxf$d J/]hp1S\`n2FGKL/T?ͻѧr9m dqyKG^\4O; 9F e]G+A~~;TQWʤTZvGb H \s86ǣ@wr>OOǣGcE /vjXԊ`w?/?^fpl!tΎM*qYW୩0 V@1Y ~a.d n*6R|veMb6ȼݓ99ӣvv>0p8#ܮKPfZ|e?n[XѯxqƠ!M.4}w6\PJOf{T0h(gl}.KdzUVg*h+UI%+wq"\-eLClRzeL\Z2KQxF! $G-E(2 P޷j6rĀʡUc:s/TvU2ښ8ی3===[Ф/۷/2p7mcex@(L/o]0Eeӯ36Ikmɖ/;Oa )@> Lʁ"]?_Ij1bz2,zV`Fя\!@$/F r  x9ңݓڜ(Gkg$Kr\ ^3_Ɉ_A k-M&C*Zyӂ(G}\Kz_25U2zeU KbS̲Jg%:L&8 >zFdTkx^f `0vz3Mu E.l)%"J.Ңz.R>A P&\q J*b R,+C WE`)&\qAJ**z*Rjهg`K`4pu/Q'WW=jE4^RRV{+ΪL|ZG*(c<>|x_ύ+pٹ96Hl[y7uCX9756`@PIkz(RJg`-"HZ.:Z pÕ9pEƃ WE\dHJ/3\q|ۛ73_bH^b]5/GG!jTgW?DTĞ;.\ek\gK㹄ɧ3N&VLM{͏#J*AV>˚! ڸi%YdW ҋAp-VesƔ0_jײ2Fd\?g|йCg9Y?ʧ@ε6̃7:i!jI*G+pY{rƔ ޸.qQV)dn}|3שxx?yɑ- LїQd;Yz.=7!s˝{n{ypgdC9gPYJj^&gcTw@F%eU?s,;i2^{q "CWO$։nb}ރ5q޵ aD*ݕ1'rDAY.cd5gQfaJjpRcVY{h>wOl^S15.D{D)PPlmd!=s+0V׮xGkwȖCEЋII 蠕TGGHEC‹bܾR :V̥0 Y0ZUAh@[`}Hk?D$ˋatN9gSHY$se"V@3>O{qs@ϼ `|٩@ei  , !8OShxQ*/:PńZfk}cFS!m ddEC5yDrIKU<wz`'?U(g'r'1$sfTj.D#}̄s\yTQP9:grp).YJ[֛9[c Jd!̍@Zi28Fń(aq(.Tv#=#}LSFl|2Nzg3o$DrBs!it cTBf!Ŕ=I2t2?] gxBH!fE,(MKLг S̗&<9Q~T oTBk6LvY8..7/\Q(h}eHx.^f'+TՇIha_F\XPym B@Oݰ6n\/͏_&\37׃4qyoWnb[9C4E A2Z#hIYNF|:g C˸VT96b&Tӄ4V@f ;[+kbk .4F.dSo^b}w=7~wOr4N6kvqޥ#/ZSn'zcɝPQ=r#jI+yд[ Ft/ 1 tW&fֲlGb %;". lְ|V8 pnoK{&9X9]TAzgT_3MB;ӤstZL{X4K*mT0Ar:XD,@yJN*'#W̽*:#8PځV ZJpkܽ<ʘ qY*< i~֖޾qf4Y>tr:f}. 
ahwտ}2[Ds+R޾" `f2KFл4}9t Iz%GYP8e+ҪoBy.UyPEީ)p<0#"bOExLz rjʼn]O#BJUO)7X Y X["# +6{l+nE2mp۰͑͹zebf;6ھ\a(z>xN߇)=u*2foLGN]X[6sA|ntE'Ǯ2t)oSST ~n:[ I"V ԗDx<};4 c)i퇟rq-qgRIUҶ1[CQJ8<G8Gg3Z9 V3N 6s0vaL5T>r N(uv*uMaImtY`Fu.jgLY)!2E1"݀\֚8wWgIQ"i$P,tKaI9{El0KN:#XԷǧwO@Ew$EBF7:iJ%(Ŕs5.69M6 urTLĜ2Ba1£0bO@ɇ- 鲊8oU +)~Uc s=LN&b0Z{ xtQqRm(69#UPF92=ٔuʚt|_{[l@[W1ͫuA," B_P)a[XƩ+Ki,֩'t!t*EkUGㆹ2yНBG9y`jw<˲/!O[5(ctsC{&v` ~Ew^_2 TS?BȆÛ'ޯ/O#ʶaE6>c>#zdWoFG=/Tٰ gQ" $5 (Zy&k懋 [N`x,C#lLDvo)[Six6# v\gW- H 2$'+/4g'btfkdϦwʹq'g,uػ6%W nÀ&`zM.pZM IQJŗLc5:}j*sZYNLj!9\"b<b9uttf`O1}5Y΃DrԖF7Æ=-km*?n|ҏ&z -V/ #nȚЧ˭oɆ=&#nOrKGIGTznL7=, N~CRG& t6HxJ&JUC)/ ?$E{^' ;#evF|I~`$+3Nj`r:ۼ-,tKs,yKR\+ȵ}hǽW2nс+)6dHwl;{f=3p: /h:~%jj!αp)zwlO<7ɤ,koZ mg§$B!2sMx2Yۤ @ 1P ༷T eV'2QYL/&ΖZ>(FAy ۟Nл] Uh"GT^ЁULE R1$&ڢ 1B H6LgI@x4͸^kbBaC-&fCq;A1tM3sKɃdՍzyռB]sy'@5"tuaV;Q'Ma"?Wz܆d+|RQTޜHD{<1j.1wSÈh <0i6 X EQ\)yA:\LL24",:&, QQR)\B*"V1R iƮX( cpXx'Quf+''skLn' p\>m O/ˆ5w;KuDpXVO!&:%~O8"67< 졷K \頄W"jfN!|$&,N$hSh\CAbڱ+j¨-{:ztȔĿ6RJ>Ƞ Pndc"'+S#(& H*CFfW8hĚfBIFЩ\a<,&fokSQ&+R )1bE-ClT[ϊ-P+FH.28գ[],'*$xp I'AhF: 5¢\ba?8Om$w,u3_8A}ڭ/lfy?^)SI`:;F tGgDpfH J>^2#^!rƇ /NR]AWQӀ$DG#<bq&Z%l!#~g\3%O|8lYnyo v`tVHgǖᅁS }B (7JJ&Rm,:0'N%2Y7fQ,7DXk. F\Y6Ë~_30T=-H-ò{fɆ>ߝj|qәrV5ܳH]lWZmV :6q;z;&6*]s.td)KH <"'0"AK8w\EA7l1G =O,'^SxAag B0E`R£ ,$Q½Df8P$Xf$FyAe 鬞p"O Ʈ@UG&}|O"܊Jj­PiFÀw25./GqŨWǻwԬh1k,jOkdU.uoUmjd8UZ9 >ڃu IC. 
ݩlfoh.w\"w Ǻ9(t"}kэq1g}@ ୍ B2*Z驎7ɕ+Wyi2w.zqr>>Ddh`O0.ܤxoᷳQ#vn~~> WYm;_vOKzg`\_s~~;o+% m㜯^l{'}9{㎝zq԰!5dtr%,kɤ;m'y4>fnƵ yt5Z:2c8Rk9#dtPý7$FLQ wVTYq Xq6y>LN5c2(ШR g/Yǣ4?wM\cOXWRO!ds_m>;g%h4W|-k):BU]cgK D9 MUфS!Y z=^$&7:'%3xC-@rR!Db:Q"u 9wVz=Sia1qtS+^:_Zg2n޾ru\+Z'KW_1 #:?)uݶ.b +GQ&c;}}9Yf=ϵ\7[[6<:yvos-9)]?Usvc'XmBCZֻK]g͵L&h4uF?n^7VjV;FmájFz!Rg:= o̚H|ͮ!6(0i6TWW[ϊ]LKҺ-w߽; @EE :1=3N:MQn5x"oTDxZZ ^1r I=sKJ@`-'N:1Ԩrc*hp4P>im9sh$pw*cqFḀ 9#ilL(Eyӄ0R^>]z_RHRp6D.ՊXR*02I%ZTiC 0Y4 I9'\Ҁٻ6r$W}ٙ- $ Ge{N/-Sj۽1}ERdQ)Цr-UQx|)kEtr%Àa^ jѧA2.H<$QBFU+ʌXb@P)["S.='֘_cN]5{tW:|1\AsA'0HHO-Dz2<} MLV3Yfy.QL3s T{4K9*tQ c9qI( I~<],O!7}|pƖkm}"ERM&H;XUubc2) PpqGhg]Ge -ph`hפ dd'È7^r`Ek͸ 1J4J8m $[Vtcumpe2g_ȶO Enuq;-Cpyx7?[\yN'Њf 7v0^>3ۑt-ҷ N=촹w`xOy6?[x`gFN% 3Bֱ294#muz_ٯ"تzL,بvK_M:“UO NԈ?y2tՏxVZW?6ߚ͛?.jfre+ִ}8?+t5Uτ*ӎA@4BZ}ktRZwكOt*J_\s |*L \ױz1T˜+_&B-R>;d+@\tmPo캅}覌ǛA+X<6>EyMr^s#2¥O7fw[7?_|o_m/wABf!>mۮG9>qbp~?? nۍ|m}~юk7BZ?Һa9\4 ¹K|n'0 F C QK* # Q|EuWN.nq%O)kAcC2>1_*Vh[o{9U#?MzBJDJwa.ޗKs5yZ.kkpiMdŬ 2޺Z rOl|\98m]WwNC̫g Nxa4ݒtQhMo^G>+`˛(?D2n5F%]S;bn efy~wWTnqjdQJB וEa}x3_Ѿwhn|;7HMr]wXI:6 -O{51ɟ]/?E_]KB74ULވjPoR-hNn8>bdzJ_~xR3^6hBߨY7vn\1$gSg?~|ANN'0K^{s8뿾dbг]?_φֵ;KVl/-x~f=`~5OqW~!x=j?xm&[,nIi;%#;dŎEw\O}> aUZޔ߭ȮFBprLXJ@bcmA *[&KcFz03%חݐv<#~![W3k9CȟNHẏUmo}t7{˧l]2/h{R;^m|>2%U/3_Oݽj}ho5~}5aY܈ܦo2c!/Cp>:ޅطuC]08쀥1ERƢbc)d7'4!OE9jlX+!d=9k*rȑ8J%N!{}>H/'RP{5}Nl"r. 
RjNLWjFL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&eiI "0Z i4RzFLX 1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@ d)1l:&Ud@z=ţ81^!R@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DL b@"&1 DLzZ7ފFz/Ѵ4LngJY,y/Yn/pwY }{/Gjo|h n,hTk,Jv;߿w~)VV_eT,7I&gػaqq΃EC}1/:N'WEr6 puKb;?klEvE7Kw.&]J%t?.8U3 fyqXїǺhq|8[h+i~A l']RS.4i:T1`g?+V[<vs(UFFUr2j8}NI;o5ЯshϷ:> S*0J[ GƋ+u8R4_pvVi ׇ%:rxB=j }/Wp1TpYK-Қ^O&Oe-0˨/ LG ׼๭I%9|wo_o׿mlV>°w7?xqnps߆c&avh?Wڋ֖{oRJޱgsvw*U+ 9f\J]"t3au&W//$b\5E%.Qh3Ռ:46Z%WTҖʠ!JLqDUŧ7 е>^oNƭq^F6w_:=oCy@A*.j2r[iiՕVJbi0*s2häW2$i2yWWMmtYo=|-{^=%|BxgޭEoUêi逭5@ qlTөm`]N㒮80ΏWXnAmlQɡh^r6ḗexlɅiޅ̍hyJUSVxQPBtBAQb@r0q{!N?}%ﳛ4HmXmΗɛ|ϯK3r0G\YdBRPu: {.\R;&M z]2++k v;deNr@WX^ƽ¼Q˻_Κ`nlǃnl[q}p a®sxz;糑!BτTs<=/UgCmJ yӤ 6\سAq *vlT ti7CW۩ٻ2N*{aՋ(tj*^hݔL&GCfKr|`Pv3{omCKe c?ۖcζ$d!}.QI-U}]2Nk+\m4sy[Sа#s_zkIlG/ζ'햩'V-Eю׳p9j* L :;UlE;h}| &~&~9ӧ5{Dv.(*p$-2^’k\jO}#c^^5}ȦT281bXɦeԎNKs;]YTu9Yv횕wΕ!&p &΁ B3xfpLlcA* tNɺhtFm*'㜇hL~L9wNWUb9vLsLkYf1Ѩd :tx4ꀠP#tJgcG4ǮJjɯz꜆mx`_(7To  OTWS7դGQr7zM9ɼ/Po ^f+뤔S·\jNB)KX_Gtӭ95|[jQE (ߓՉUZ_l 0T֮l4/M]`\U&S}1*IX,v1DѱGXT\S(>.>+>Ŵ\2unSHgU2Ӿ FڊKKR$}(UϲWTUR: x9⏀L֡? c?-K;;͡{JڴNB1l^y!w}ޅtU` |JvIr7(fz%|]+ u z$`); X_8$`qifR1qZX |ߵ<eqVp)ڛAiN>̦Vj~%n:~ЦL ׋=U{`qi2cdY 5xUNEV!0d.57lp\bx΍"Up\5KBddsg g}j6[mQNv~l+'j A\A򅐥[/9 *;N3NNʱT+*lJȑ"GϬ7Q7L&+ЏAIT;S灉xc]0pE})9*.cnU=WpQYV.k)hlf`s0qD_j.>mɕ8piFBg󛷶4x뚰{)}A^-{^‡7=\mbԶBm>KoKOh.Csv7Z}o |?}c8=޳Lqh wv+m=]廛7ʵ>;wCۻ;sCtH0 ]{^a]<^ݪMon=~"uݿv+6H<%d ~XR/`>Wq4 w1ݩRdj(t6Z&bNB W|*#:dNOBZrLUb*B=D9_S%\.jK$>4db)Ar$\s{:x:^|ԉY2 &9e&yi\Zdc=w(f`Dn0OEW'okmH a܇; c%8%MdYCӤ߯zsDjY#yD&g35UGɋc@BbTZ:( 0+ j5r jz=kc7,MNw42\Hb%{mJd,;i3:zlXo ***s4["\>0뜑JpRHoA$IQ( 9_>X5e# ʓ%=)HO@$Ԝ\R9;TfTj/Be,  /.eukyry\E4^? ?O?OH~5!RHѧ(28-aS0CvE++Y8{2&j&oGN3Y4vZ-r#vNԮj@#2?sa0DM\ڨg!1@dW`%P7n&+m| Vq ,CzOQ&CXIبp$>`V"9َSY/X?ED]"T:&J)[b GC@Cb$dE% Hn-,gL 8)emD4xdL'͍L6FidyVFjlGoMu \#YKES̀.SB'u 1A_ 'hiRF/&mLՁ'GS ԁ/ׁ/T3ԁ ; ׁK8noU)B1),49h@xgmuY1_iZ]Z5CߪƠ~LAd"g1P¥@;$$ȡG&b!FHlN$- GV))|Ϟok´nNeO9r˞I$VC(%K.e2I),Ls 'geP p/Pxy}kiѣCπGy$HF"8fBJ Ƭl֮f_0!l@~s]wL9Ye;[t'R3y߬çOxD4(L&H&+DqrUrȍuD. 
}A0:  =<5"/=$L,ꀀ^3H00ЁL|mѦ$ SyK+q^~ 6h xdPJF7("rJT2"Y,&/WQRkVVX/2Qp(6nevw'Tċ.z :*aܤsӈN")L0-gK1ːK:Jm6OfIkn ~׼Ʒu )BXYLǮ0\kΕ RM*FLLe"'OL/ƧW Oq:X}_C+"ʖ=G$яG'丏:}7!h7 ;QصҊW?Q;˶%$d:M#_v?rr };g2b3%S9zPOOڛ|lΉ{_'|5(Ieh,qr<^N-abtXŋ'%R&064DۘD:>MNLJҷw%bOۿ/ ?I%xL:m3nyr&ZMommͤnOÇ4?tx=҅o8|L27R{MOɄ:( Ƴh6ru6,%!YiYy;CkVvI6PUy|WuL8Ǐ~M;1Qq} ]B8;.fxTu|_$wϡP}ZF,9G#vdO/4Zm3y$|zRuZWJtSy5r+w Θi:=o{ԲSQW_MeN )o y&+W|Jt7{,.gVYکAa5X|&U;6XּSwA!JɰJj֚3w5xѰ[2\6rE<'/rRz1OL.FkDQaEgHE UچJO#,2LڔҖIedYVJsV`uHU1ntqҟ"xc"'څǪG\1⒣T*G\]O E-0yp2AcmB2ŀ k9)L(%p, -!xK<4&!581])9; nдG:YwnPrR.4bO֯GzB D0G JKoΘMs>--b Ko+-_[4s ,,sRi{5j!s-d/ԵL0Q[cx=)uIxdc aDǹ FXU5rvhUM'wHUqHgI9 ꑣ5X՗ߨ([-s/ ̠ww2{bSHtURb<^C$^% xH2+@\DKX(p:ಎѳ#<2RJOy4B 'FNܖzUEM8& m:$z '+Y5rvVl\y|&;,0G=cMzW}CV5..EF?nL4%(JkHu@lR3Dl"3$OKe\Z)*)q J)mB"nm2?KEW@r"K$;KJ D$<0-]~y9KMܯ>,N_tL˅;)[-DPf"#/Zj0S \!%8-2G~@Nj9x:Ye1 S7J dc9xցrH"'AHZ"! L5UY\!za݅-/Fd7Ƌ1R Vs.W\wl`a!Ӳo1 Jd ̍sd,&C1"YT~/.HoR|ۤ L'A#LPMp>i"rmRQ pΦR `L0}ia*b_X`ڄ)bVdD\A-obDH3M~'Oo5 g itչ?"|^p 88)~La~<͇d*`r&++ŗht'x‹|ĨZi '坄Z/k.%—<\Ǽ> /.Wr}?g}B?O?(KH)D$>-D0ӲȅRv6zsvf-U.oN,t'~~י q@ %YxU;'RHՂYIE\B\CEBhR>hLrB` o!Aw#QW=ԕVrJl]!sTtKBJG㟏Y/hxps=y-EW(edžd]ɨm)%+ZZAEWH]u먫JdtP>u嵻 uC]IAGvՕ)EFWk]-gBJ}ԕ3AHW,]!fTtFt]WHiyUuQk q+Jt)kK5D]GWz9!]a犣^X*21+E]QW{mt0YRqWN5(/o@p#|I_0V.ꪇ2yR2 q4Bʮͻ:PMEfp2]q9 ;dbʡ녎LqW+E}-Z2@+z)ecq:[#`/t &_t]Wz+!+\ L1OBJ'=] ^3u]5vm7>U3ڶVkFdtJG]m\h!]pk6õVHu]!QW=v[OHWKW$]!鼮ks+)ie  p=#]!-]RvmYJI^UWBp+5BJ+z+-~݋ -#+ud+o b1G]ARctDZ R*uG]y!k]ٰCoE\|vl^ ^B':k izi`,άc4[ q+5>+˥&+tX*BZ뻮+t&l)tKJ7e]5w>!eǶC3 tez|*BB`e q+uBJoz+%sMFW+,]!]RjuC]It[2B\ũ iM+,ꪏrZ:28j.OXwP&p _'Odt qR֫GoV340 {3~ Npr2 9ܱMm$N}5c3C^wKI1Cl8C mM |>̗ Zv:d |p'!G'Gh0%IpI8_(uq<#GU):cQҲZoV~hPPd >aspU2\{iyr0k1[w W8/=hǿ%Niz-ǧ`-f`}?8d%?q-?B j!2]Ր6s?jմFHBB` q+RqG]ĸ&+VW CڶoFٵ7+/ t|ۮfM:IoyWT{ Hg & _%W\~'q+hQ^/F\If7#}ԕ+J 3sp'ϜDJ壮Yoܶl|8jyf~Qvmx6Е69nVd dtQzu]&ꪇ[ dtmOnF]u6ꪏp \-]!V]R죮 MFW+]!]RZuC]i/tN ie+`AteOk 0W)L݊H-"TdLݫzie燪RXz #+5df 뼮މ>3f5j8דVkD)e܆Y [U`xۍFZQ֧WtBW\նYu?cX 2B\CFWH\uk;mG]DWs!]1tti*ZKM(̚Jr'+WKj2\!h|tZF]QW[8!]!dtm[nFyu流p7&+̑Ʃ 
i4+F]PWF#(EW|n+ĵdtt]WH]Uuʆk}F]mVI0]oQ.+sN&U{e 5*֏Kl7ؼBtj< XIg4q ηh>hIW눴Fv]WHٵE+/<$A`K'B\G&BZ@Yz>f?f o5vmo5-OjDEK76[E7Е6T3BB` q+m+jDiꡮB`op +}vPZ墮+Mmj4y.4 Y(tG'W><+~[+gTI^YLa"KDl^!T7`^(;3픮ڞ0%>l]|0^tLv56[ n~t}jdډPe,UR:eʢ)9MfeK]{הU*o_&MXxg9Yᡅ\ٺG ԋ Ac? n8)nAY>p[|r-7}wISwzSxt&E:լd5 .S-r}!D~ۣ?޾} .prXTHTԹD) TfVN sq3\/G/SɺZZ(|5)*2]lq?y 'LGïx}y}Grhy߫ߛ4Js*?[݈&RB|?^O/oɶ_7U~SkLtLNONL6d.-\䊩,އ՚g"uRs4˽,)[s]ex8դS0v: IN&ErY,G4,Ms@P8|`<ĕWaEҙ"4sMZj[|Yܟ?O8BZt9GEJ&FP7]_x ¨j2<k׋9~ BynHd6O_ƣ;/!x ֿ?5f#͚:X !UݽT.<ܐ?'n}b-w*G1h$Lr`t2t4@S&pUJ vųNq:xNt6_@]\cSo?4 OW?fZd>kO Π>_ !5Hr0\A] )R e溯IK?NKz৊crUۂf3w6$]N#eWuD>hi8?\kYN\0..&C=~m6sV귥lۅ|pUB oc/G{sY@<**ga008? ]`pp+oE޼O>_ {[잘ѻ?(woERxbQCX84ŧނp'{u@UjXVJcHRHqFm1}T72O׏BPj%ىc%*2ɤHÙ_"RP\\+MP9Q(]ʖ: J/J+ J{) {oVxPkshX)UZ S k\2U<Ş&pt~L"4NAvLUb=D3Wo*e2>+J(3җ.(_ߡ4˜L)]0\Ƈw/2z{&}T'!?bQ3hMKu{o@iOjS|Vd\qHhz^km[Ш)M2oK<.97sOtu#`ݷ䭒MF[ݟ]mޱqmP.M=>f.<JLre*Ȍye6.<͜{ J\ JJ]VR Sr p:+9η#,h~L"ɋOVcu7{W 3>\3g1[r]mo#+xf`r6 ^LE+KZɞb[-[~iI+Sc-wSl6*>U|Ȋg =R1dRUd踰DJMsy07kr$t2ζ[]1 OTtWks!UuXw  7Zg<:X&ٖovzֻ6Kkg خ(k_zG.gM?Z`j<#䑑eSXM,TZˢ6>,U8 |.Sz\FҤe6~y>Ѭv;5Uq^勳bT׵bkct6 sOL:AA“V:a } Ȩ1-֘Ǐk>O|ߘǨF }{ &30Rn&Sel[Yff& &lvz+RVq84\秆7Ͻ5uP[~c2(~b(t8!d."aD29j΀ZB+9SYv聳Ȍ4"՚}d?m(rsQ#JdL iV =v{#)wh&!)63.Q!uzZtL+cC6{zIA+LO.1daGY!7>8SNU;1r c$c&maI$6hC*;)69[4942vB2HNW-k_43B{}h8w38]W5XM͕W@N=ni1CB/0y7u\ѧg@ҁ:L=:nrsN!Րjg nHuL/,#b;d*?~̰e'/id.*@,4C$`%Xha~Y7@|JϜ|Jϖ|)'LN1aL)4F-P/L,aV,L0ii2ݪG,BJt>!Jde:C* yZAe2\֜~Z"aW͒f|GJޒ=}I}9#-K9Ao%8G,Xhk %"'P1yiM%7|15k/~c=t70b㯯.VeʒQ]~4gLN#u42ᖙJ pR30&,E3ne6.8:r;9}mr*z]l9yn*jIrd6gVsvK8b6}Ĩ{/Vh)Zr]1XdYZn?V{g>y8X`9R{8ah楼|}BWEֆ2,w)a.$.2Ct&sLWz6AȺZsJLs~AZrwޒ{pﭷ"L믋ؓn;~~~omjM_ |&oSƙ#{= *؋OC㏤.hT)ɦdmiL0L*`V48 $<sҞ\Ni~B񓠗E!i#1,U3xmGEp9&mt`eBBQ٧DIf?覉pKrx:PA3Jf}"ӈA{z#y46a6 צI q9& .&ϊO<&$$FH%` VknAM-[`iZ;=A>9͓;o.*yt9ocH_۰; _T(EU.h*D(}`9# ᤐnH PV 0VՌښ "`1K.Hh#= }B[I9s>5gհJMZ3P]u! 
nuA=n^~ZƎCx 4~24/^F##ٝ`+K#i>E攌hI7d>n K;FWCԢ1),49h@xgmuYZn/42Z\N-ܞ_X'y+q@(R 4 1 BDR, dMq pC9ExlZ>&cwVe,h%^*ԩhїGP*GH M1h5hP&),Ls E+lGe.w KQ d.GR#ЌD#ɲA A*)zóAeQU7`RfȱV8) ZyArGW@M"{>NH_9Ô٠@2qkku؝ME%X{~yfGXr# jubA2MT"7֑$@kAat(SGW8^3hצyHb륧_ٚHĢAkf : m@iܠyh}y!c8{f^#lp0$Ɇ61IP*)Et9eDB,&'B RXث }+wE?jv5Wږ8n:j("]UЍ.+W&砘FtLd2XfdU,C.R+5~58ƪ ?ĵE M]a + R v1{?0YHx7ze={x }e9.\JZ:fzD>~2>upJV'nv cq2NӫҊwV2s‘/?y~Q{-vO~6ߊ~eF/bgq/̼sp=GX؉H%Z?L%׋wT<*y*屦(jlɇ/wIo/_Ԃ?4G4mmҺ^zS9e/"3ˋofQ Zʒ|HFD暦3dxs;Sv#uҕ3M`둍A]#7 +[G̪Wϝn AiD O :zDAw q<&e#w\d3}"'-` .`dFiKdU<:Uc @ WJ(B0YkS2J[&%7,s&u΢R* C^Wf`eNTwqhT8uq}P*IC3E^Kk5TP*xqwfVv&8B + cO$G~ m\9%BJ"D/e.)Y]ܑ8+@=3;S3]_U׃ZM iԾ6$!Am옣RP q@sk'!g]I49~ŇynzFW`<[&95 'D򰵂$,CXCmѾLʅf9kUIb ipX=p ZO'.xRϱ)c)a08p'3S`s\Ve/|[tiyvbOK[gHo_8~qS^(9.v*Zn_NN?D0BxOmiݍ\ _wQkvݥn7* &32P(kwņrk\b__<_Y䃾[|㟫Tcm:zxmKs!{/^7Z9vaZt |/z:[^O}Z-k2;gR>̾2|~?cQ{P&H;|6n}{}zNlet_V?|WyҞ STLZX#obg}LN7j)>]/S5$v J:(9U Db@b!b2sKnP!PCAFA,'+SP*htDm2C-FUS rBʕAŅmjF^QY2FE-[6 GaSgDtMhL 0 %$w0aЁ$xC!T1ВAK[;|U?*(Mu3M#:|k'[3}u.(b4~P7T!,grW f\O0jk 2 @ !%Y`KR:*yb+YA6ݠL<"nLowBO?vm-k̀^D)D*%MȬ Ҧ]ضyV>;Oړxέ? Fk m1*U9%9A XĈTo&1U`<1 xA)jNrs|t[qQրUrGN%@glejLOkjRecXdt/w8bQM‰%5Q\WZhR1{!sc8cݘf!X:)AͰxy\ty3z Sn#ƉƷG仞^uoP;B竼,j}Bcm3%²8nJ@I@y-O"E\R*E]%oQɓ 7S.t*UC evx.F^jSA&_ 88@NǪL?GIam/ sZw!sbWx]>B +-[TY>0ڒ;y(;v(@& QZ4M;%pWu[TwnZ'B|a*,[+4-btPժ!䙌f Ӟ8),.j ALg j8Nuo:/oOgI#Ly3GBkg'H0WO%A+ѭַ K1Y|m}KrJbK St.z=[\ +XP },wѨԲ8'g@~earqBuJ#: \>t8?<'v`p%o"tldRXk7ohkROo {%ܼr7a[[[{ }ǀo`ݜua\Gggޢ͊}u7w_xA@%gK4Z/=BREɠC.}^q6q9ws]^Nw2M/YKh߿ٙL{ lv{'߽i7-rm13?y?߀iq;X߮>v53aDFt;o\g2a_` ݰ=OP^0^t~a69vǿ$&:⻘l쪩љ }<%q)!QQ [RRp5`XբYgJ)TbU6=([=&ҰqQgq} !2$Đ E2%emhBE8.89wWzyLa_ޮG]yA+? 
I~S0q_zvGC<4 ?``*U|mp3[+jGb:DeD ~&FUؠ*@2E lڨcڳKYm0-fo- G,BU44Yq-I]L`tZeH H)9uJPu0rkzDM Y-6#EZ i=NN8s>3/u1AG$jQFߟBQj_:7jR/OuE/QkUO央 8uG/l%v&5:\5`U\őS-+BNЫMw,b,7Mq45S1V5HxM"תU֤,ڇTqE&N<.XO~t !X:)AͰx|Lv!9OV| {  Sn#4B*0ÏgSRЁڹ:_eQk3BmRd)e0>kI@y-O"E\R*E]%C FgI]aM9֪ 9yq MR N Y*$p $ J"nTz])ĪFɫ r\ν%9Ԩ)M6qp Eo5QbSkK\k~b^||qÊwri<;9,:B[vVUTrܳn!YSڞZN5:r5T"u땆:!3AL)_-ޱlxƀFS "RH7h5PkLhN̹qA (< &ĔU\F7E2mpk͑wA'SNj`nF@Sg0J*kZi,3?g+7gt?mُ&hT 7~~o-'LB"`(j 5'TM@٩17UepFx7փ!rcJL ʥ(xD{5sver2 oC 18u2> RؠAМ="sd6ipvfjzRg!H))h|NT(sFGa:|p[nE9pf2Rr-7ם@tH)>p. /< SS1!FL@ɇ-鲊8oUQ`:U&TI$1`<:LV .mJ=*u=){;!PnzYgﺦa/% hVcs2O>S:PX82t?(f RWB6P@n,s%54bTH.GhzM|2C:2֔"cbKtfk\)X,)9H;'9dA&ϲ )xJ32 RUΣfˀJ v.z,;ǷE~9QHOǯ3vbv󤝃c .Ԫ޳|d>q>.dD_dX@R]f|Mo3o~#q]gzC:b2y&?Lq(s%O|g8roX=Jj/Z4dYOagxf-N_-zna@[G@DCOetn.:_|Xys+-, ذvz7i~my`~w3? D/X\>su=}ǟ:/o0(Rj{?Ƴꍃ ` ߏxhd.Fo\`|}QeE\M/G58F&$,6(h g *ӽ{b/Wr꣑^q?Mݻ;4%ƺn&)"F֡gQ1єo@ s <ػ8q[z,h<Жx&ć'ˑka=>BXk$3J{Mʈ@q-TŇ&%bZe =rRErD`hMoetVymЕa5sLoSdj*,WށSEC1*hodV*.mޓ'yǷ̾<|cl>~8˛]'<ҠS&hg lz6UfhȞ jgoc)ʓ`QJ:&6NrhbvFbAлv>ʊe? ,},BhE v~ SXvZMnoB[җT N1eGfN jyɾatILf VFqX@(y-%dC"a=]_}|2h>75roUT-_zU{hjzguƮ]ryCopl;=.xo6_ov3J%^%;j$oC֞tqno#6 fKWhK~Fr\[w*nu{u3o8dzm~l/H-j,zx\yxj{\Na:lX{ާS@o*? 
a{w箷,;_6wy[GkQjI_/~Յk}⺛3mϓi.Gδ8nUo_@]@71@J)I J}H Wxr 6'E!c3>d[ ;3/;ëuT0:G%r<a`GgBG 0S&,$iS"3V0=ن2JYSLNxr#W0^%\˜ݎΧPiz?,L}~g{fQɃ˶L[@=bLz/(aSj4x M&nG# &2Y0X9bW3gwqC1O%jWFmU9j>jv=2跍sa*hL\bPq3y!1dSc%NYiGŁ2!C&LbMLH&H18,Nƹ9aK%%0JD?rD}D#2D*Rԁ"j^t$󉑧s}F()%9P0,D`FD1mqR p d:indI ib#b5sX7)ѫgʥSuV%ES9.>.qq+7NLĕ'1AҤdJD,10Zx!*wiWU渭~|G_gbvE~nR Iq$fiB,b" 7nwX6•KYr\cmA"O⼢hs<%l!Ř!DR{&8%[5k_Ԛ%G׭3V "9cd:0t.3[5T3g"ϩwN~i*nҵoWQZ۾fr-"ʈ mcj^JӋqYסE?jvM6.7^ċ/=X0n9;"f)@& A,ZZ!iӁt7<@&oY℥Um[[VKTO/j0x96D> /j@MFib]i9(Uz`lހ<}hpI}y_1qc`႞cmmKW72/%!4.?8pev_snQkmX]${c&fKq|==|9"E,"pԩ7S Kc0%cv7g ))'ulnҐst{uk}_{_U?h>~?Z4]U'88;ν@O~ibs,^bLҼ]}e)ԷLEדQOjxQW Pr_}}>kHjGѬ[֍H~7b)A]N`||U$ױ{…6۬?~m4x୉*"P4hj%z<,xCumy5 j4++XOX\⇥_<+N6\)i6Kh._dT1p͍{RG+aN-?,'Xܴ)K>[:կ>YSɷm,eںfjǧ YMlT%W$xPvy7Rtgh:~]"-Q R,75i3ݱzTMfuk~?lXU}~{N6KU.fJ3A-o7ns]J:ΙE;\d3>{QKʪsrEfC%)/IN'=[dH`)E2l!& p&9GI:Bg'H@&79[&ݑnl-opSoJ5c#e^k\2Ll$`5>PhUM%#*I=?KCc4zW)7Ew.mc`~8V߭v[M.wXMֶfiG{s㷮 "Я%66\#1-f=ӚH G/.ԺnV{AbEh}V7tw#ڒbI?#6|ðZWXrE[ƴNk$"4E&$튰jg-qrԚ )X:b+JE#XD dzU[g⬟3B{4vSME{=QQZˀ?<QLe/v*Bp& wh5C_tIWճ78}R5|1}M`g\7Ch($.3MZI+|&d7)h]4&!Y} "lk7YW6ɔC={>#]8@ufXr)[eKok!5mɋny> ENzo_jb'3?4=loZ)޼M&ثPn\j|у6> 'w7#COBpyv)yw:X&\8FmOt wY,\h#C`$%s] Γ)A*g%NmP5׺ŝ{"'WG6P>2_|Nhe.f L B *ŬrDKK4] ^ AhĿڨƸqU^*rr(ʈj́s,:V'6++=DX}-.5u-Nw)l{ (c~#P Vz'$^|6,{ 6X-bƧVa :Ӟ"gؔk CghlrMX8RXe| ccK`Q. n<0p-)sHSpiS5 0QUI=L0v%FE˃(ڰ¤31 Cl(khPVVb0*Q3Ie2S bMki ԭxÔJ& †XЀHM;9UCK0MXv)lA? +XtLf+ Ϛ+"$dlP`&UP+CRqץ]liRX. d^f@uWxR͠j`o(ǐ?g0c)S~ w)l;6 1XrF.\6C1IA1δ"@Y2`z$]D8B{Da*3h(jբ2+ETUӞU;:l aR@AOU@@jMte\ZKZVuD"F瘌%IX^O"+PH +9\QCqBIѥ^V $T-NP>DDْ1Jۉ L2V5jGun&%YQIՈrٲX̉sBs}L LEg@?OusyZ+_iR%6qX6P Gg2 ,Ԏ*һuTg%=IޤB^bQ U+Iy..Դ  Ap!"ȄOK WDX+Q##Rvauu'#x!$[t_neAq <qQɂNe~x}2H";*N@m3B\0N +:ڽܮw*bw?3R: ^{@#Hyh4A ./ w 肸6&!(۽CXS/mlcgW+}b? /c\MXMʢԋ9AR%J*Vx7bTe,'2DF2ZuIBvP āq x!.AexF .;ĥ@\ D 0@+W"iwwERzʙ! 
>wEZ}({H]FwW e?iͳO <7xP~!QfS(*>p_@/#P\t<>Dwp_Y)BB|4srz[eݗwRoT_@~?s^e,b0聺T"AjH|/ Kv5c=ؠ Fzh])Y)wUa>\ptw1W;u"K!lB)y@rNNWpť9a+Ynk.\_Toczv< u ȏj?SC>~dwK#$,WLRI8kdh|oBר6dp|\iFJȆX;E_SNte9jL1d9h71]Q_EJ}aqH)h]VNދj4kɇn9`|A/Ks_Z|b:E@za8C3˂f'oo{ztn}e!|}kn1e _ţAX.mˏ\ɖ[_C:;>Ekc'gOONC.a)L ;%e!yR gɔDNd} ғa #@}}1Tdߌo@"h)ZReLN59 OEYpb<`l(bicCl5n^)tr'/ 4(^Dwg~CQb5w*I:97p+ {4w}V o#5\s|c8 (j}ChxXF3ĘU4Rph=kӀ(Y^E [=Z[/5 pb%ச# ť^hU$"* a2t,qG׽>mW[Ȟ=޲,׮.tQR`lA^Hp&\Cj46V ch@V HQﱹwpDžBAq߶i[,o ^'c}kPgLɐJ:{Ј:)u)^OVjњQπkH!lҫ`z&[,T & AbT(F5W7w9i=Ms61&'OBc-4Z#bsaM76mLr_[1yD5-YUt֡E5z0o%~ᐇHd’QM'l|-MSU~P}8EsN%owҰtfcm? ?7C/ M>q芦0~q.j%w/v&츽{ik H1g;kӣs|BGĄo!/|{3o[6s:8݂iH3nո[p6H݁+L֏x򓏂]oZcҵЭ =|h6㟓9 0CL[|CCLIHkDƌk@KUقnD=3{!gn殑A57CiT>wT箽¹k1wmtgׇgFi:tYh)kaSfLJUX=Rnzz=eFj|Zt޼5"JE*m4勦ScETz2UtW)ONMPй.\0{pzB鵋JϔR U>FwJ)l5md>T{UžAUc>Jj6mz)DB )XaDBqtLPU[+6@VE6ŹxN[(2K:t;/Gs?Ty=<|;œEqf$8s )ƙP+py|0|!atOpy1հ4OoaJ r*t\П9q|ľ5L9^*urv ~?r%;KQ9k*+[Qm4-ܪ5cIǷ{W=נOvH[햴e f7[n¼L V<<1 [^i”B[m\f +C&L=0%3>[٤LB!KU:D4RKj`gSU'[&CnycMF"*Su 8l,U^[ AXY UM*9h܏.ܧIk+BNe Fġ]|*[,gTAe6$d=4/![X sI (lAY`:g3S&V{2e'to1nA7Gc+\i"YrB30AM/ d";l8Bm>hG5|]|nc6=|oCb{-:s6A9)[Z}hgyr ګZi-hk=u |@$?Ky|+`>n6ch74 1T#bڸ꽦gnuU=hYPR&gcND=94+mhV5 G 4S!HY IvMA U2r"T]ARjYz)o3ќ]ՍysR" iQQe_ZT5&pRU>q6eɎ f|MloEZ)HB>MZnlt\%BZRjndpcptKe=8GJC$Ő9Ir :+@Bɽ̞I=4ч` UU hLN(I^>{JӫQ[<\.@qŊeqE/SQdRD!J¹Xb$R@9@*jZIt(3_վJ۵=e/\H|]giAw}(UхGT|j(**x, Z@QҨ@Xh8 XWRo7F\#͘jdJ)Q %* #ya{g_נ5^2{ȅ18R@CBr^[P.$c&IɧU`AXzLFt&>w*hW#gOڭ'-'c(g724:`'b<`47iL)kf9s0{mR$Ɍ8 T>+(0y!&I JX+AfcZfjֆ[fM86]M[}7:D6J>r ?h䷸o']%)An+Nkr*J]]vvT4|X*?XN "h*DY2*sF(m . `98AD2V %X5$+@$(O,X'E2 5JL UGNjJ5,2BY 倅G5EE2rٞ#37]L_6hx<};>hD$l@(:2yN-aS07'u Y% CQHe".xMM2h;4t;̥KiR3.B"--dIfe&6Yh8K͐8hNNw +oQTcPɓt^^1Ar] $ ˄8IxR,I4ɚ3+ɹC1MGn7+ۃ./vv{Je?;Bi{l2R|hP$8[  mъ8(Pr r/Jz&! 
VUh[9e}嵅/F.vKU G񗋓ε'ӷ'c˳.cnﰯ9)ASNGv cr~2?ГZwQl<ѳg?0$Mtilh1u"Q3!,'o/go~71[nߞ?le&3l7,f|h9?tSs,e=lB{b~|5-៧7m6Rk^S2᭎C94l*N, 4:DlӁƶ2c4x^uJq:>|wrG hO.Їs/gwt<*y|L|-w[v -%XJԂi#IeV.D>{Neo8DV0^Q{dj!Klq-̱}#:6-㖁xzd:U-uլ;{f>9ޛMIV[k{yXwFmp?z,|L2cL.>y5\1"jXJKС/=[t.9"sY^hK#OH.pFkVẓEa~3$y U:JO\0YkS2J[& DYVUC:WV`̯yZ\^'84$o]XCo)4D^?LcyIQpm7e `;I F+lB@3=e]2)r-kD$ZeDIIJRQa$.M IEDP d:D,~|JYB8Yd(TRIcکgM{ĺ+BYlhsO/g1m1nθeޖ;9pREc^ !dDɍ Lav\Ą,1- nH*X -1jqg3 3Kil. hD$mT`ȶv)j|*3MbE|̒K$l4ȕef!@c4PMP)6Vټ >Ϲ ta.$ѧ餱Q3q\FE 2V+ . bZzW ZK E4J*>#,K9NoP ,d52%/E Hᐇy?4l@5Tztwiw.>!E7xNhD"-3Rc4,^pw&HIzBXEU98:P9N@;WjZmr*z] BRUSj.Fn(g}|f,vYvyq^|1zi$3,3O^ ?=Y &Ϟ<+_/3 EIM#ݝN)1p痳t7roFrǥvu~7*ZC҂L3AB!|nYh''KW^Eڳ~PbHFCumtVGtÑ6m-6d|yxS3rD)%;dzߏP}8{%Mu_h5M/gRͦogN'2\~U^WKDkM6`h82s&` ]}0twKB-{,(#1vI\(5+ɪjwf=H+c߶U# X,fcg_6ye׵RUyz7"UGvwJdttKG1xcU]]o uбۦ:mzն{VW=am]6^7=SF_STеNkͩC>JF5<]uݧZ?3iIUJHf^^,ꈺH]~|yfǜ tn|qܓE{emyO֤A>9r¼o%鹃f60}v;>k~qꋿ_}l|wѕϗy~~wᄐmߢ̓[G_mPa=J3+c].v{m~m~k888 s`lfW.zڟH7@Ti0   ]pF]1)/wh ]1p1bh$uŔ6]PW ]d\)bLc uС ]1/]nPZVuŔj>rh߼su"`NZ)bZrSϣ)<6V^+6ZNtŸiߺ4^WLJW,uM=WSwl?ZWm})) }ME=j3c8M)nf*w/-6"o_/߼*a=m9X\.OW+NO>1~F&D7[h4j^}ݰwGߖehY+}po\Ap^t4*.jWEo05vU ܒ˯tgrl馮)W;xEt]ܫ2Z=%6?=.g9`1A'ECAə<ĸZh<ߢaJE3MWI}+ƍ^}s2CsU P8 <ĸ.Hӆ'-_YoU@UwS7pq*N4JhMЕ-5uTZEAb`gq}+ .w]xעD芀 ȉ)bsQZEW3ԕNE'HW0/] b+)1]PW+ FW]j[4JEW3ԕW 2XIL]1E)bZ}W;SƢY*&<.TIPi j/Eô6CL exk-@-1F'EWDLLiKu:e銁- i]WLicW+2j履8Vnxx+6Q]aj׬ڹ銀 btŸi+ j2:+VNtŸNLtŴrQ*1 ]10z1bic]WDIM좫 @ $;q+ҕ9Vxfp3v(DWsԕz+HW Pg^WD4]QW|;\}1WV_d}`՟hA+9'&?ORΧ 6X+iR5Z13kuŔsU 84\9#UbJ]->+tz]GXWIhԴJi>Ua!AWj׬׀.yI~5ii]WLj2NbǍ銀]1 RtŴ.)Q]PWY+Ƌゕ+>w]1e0EW380b"tŸSO2ʹh+|4\9r[Ztu]y42HQPcqA *w]QxTi RWр|Lsx–KIygR)G4̏J*8ˣXAG$r%%V;A-F9ńJLi ޢaJ(-9hG%u0'FW{M%_+.w]1e"*88%FWudژ}tEQ뢫FWqˬ mTzr]tp>1hc%d-&*]Q/HW 1 MEH">bJ(̗]A9Z9bZ)f+#ߗ)"`lbJ(}Wsԕ88ʉj5rS6 ,QA"`(FWČ 2-fd芮樫hƧ7}`@Z|Rx.a~0ݕU%bO ]s>,)ZUV9S(1="-4Tڔ Z10LB-25M0)I ]Wb:`f?y)]uE- H芚%DWDkMѕ2օI>N$\joO4Z74:AWj׬銀Aؗ]1)+ %EW։:-EWL>w]1eEW3ԕ芀+E/EWo1z]11EW3(tQNcpdZrSw5G]9@%5 ;9]팋A󏮘Ҕ9; ҕwO%FWkb+urQbf]HW邹/\ <֌t++.2<_(@S(88]|AL]}mk:e; (իm5mLjzgz;5j [1>㢘ZicDZZ>F#*YuN Zq WEC 
}VWwSU)vʔ*o಺j>oO?Q^U*Zx.BmkTց u|?;?_{ڝ'eysяU\v\9p_ 7;Vڧ.nԛeCkOo!ŏ&WuO黻>}gϿ,uw^/ٵ'";}STUk|oľlPǞB{i{]}<^^H.W}Xp?6Wӻ|3^ ֪PuQ˪hTA9B+vj QǦs}womWF7Ά׮Uh Q5zy^>.y>לѷ]wr kۍ{*alBQuvIMJm)kv Ŕ˧Tj,n+GnƏ;ɫq@zp'nq\7{WP+{G *U%C]: = 1Z^۪6{5HR_mo!tMh[>֦6p]5qhIR6p^6bpo%cWž]Ņo{"N5G'#mݻE\=usu~\NO/~?]rchjwlPktCցCUtiXA }wګm==j*[|hizl<Z4ž p {6p^6v}qގX{ocWk=[k]S;/ZkmݸƇN8Si{z/-d] Xٻ8W$w@`\r8[(.C$yj|R8Gٴ;zyf{eQ<!-KP`5] 98Fm}nZR bIo٦cA㡈FOxWg #G' 9Z;|ZwA[#%D2!k,%[cb"hc m1Y8ӐH(&/JdgeѭyF9^ zL:bZ.>ڈJpQLbCqK|ZIFgɥjpA ,{_r^*:ya<_®n?󝼝j_c{4聹t?#GST+_ZY nUV_ Ejpՙվ56~z}Q[{1P'+;0z:=)VEdDws.$3$qI W֜v7W,g0@hC0[+J~8`4+lWQvxN#=٭;oӰ6yH>S>؞rZ80?41.]}ȣjg\z!flQӿ۷Ĉ)ίWY˙(ۥoYS085r8G'UG^NoXyc>] âl3}s;]\'p`2/ }$BLb&F*uܿ~jE\!B{ fc+h>Za\md[=ᣜ'vgx|Mr<#'֞ɫ930UI?&j{ޜt\эj٘bG$`7%J C){}ڂ&xRdPrt=T\lozi…v wǁH9g< hȽS0SB$clj6tƘ+!nZRE2%]VkK۷oߝR!po4Dݍ 0߲/3JAN"<½vG,2ǀd4E4c7H.[& L8L{B%I Ksm#`#~X] ݕ[G00c>[oCB%  ",: -l4"Q`g6yښK ]zCm؄y1| |.cD072:KQ .nak@3^g) ar00ۿi ] 2YeVvb q^-J֨   d̆L9:n~6B|EiFՕnÁvCRB^JÇ^KC ^^!7 2Tz+Dt&%W HwϨT`=@GA&*zxb#0A=eh9:'8ha :Y(Y!GPɛ#;Հ C gMP81 Ԅt4RDb"TuIcpA|WjcNb$SnS#ڷgރʣ[V}zEFyx!Jk-MMtrLCZ:qԋQ!s!3=?mnD*SkCz 6.Ir[2xxPB!.XMKsQ`̍QIY6\cAP$  rDrh\Oʛa?6/pr:Ք۟Cs!Z)&u-9cl`ٍoZuqٽ^;h G.ŋZdYo^9'cg=䫓`Z3LYY$nH]v%,f={?k^6x/xf6px<]7T^)s߇lCEBijל>wIۼ8}޽ y hI۬} v4Ԑuz{k#ҍu0Kk 9 'gOp;ofƧMuL_N+=gG}yz;u~f<'ח;ƁZck PoRemFD{kLB,<۳=<5Nt+~ɤzAF M{6bŏY;)< n:tc6xOF/? mM&*2YwaI( f8_ƞ;؆a)ԧ_,o?w$dZ}A!>-68PF.ėqMA??>Xo~0DߩV;z/|"q s'C8mwxlR<}I&Ȥ'/WM|1SK64⿓+&>1P)ڻɇUDȼ.f/O}eHx˨o.lxğo?3,mqǫS_W+oOVx̕c/X{t>o/v.8jKꕜWUߧG֞o4߬p~w8ͣ/=}b9힡BU~t c/\>n?:G+}b-'ӯ~0U_|s3q;c/@G{4/czɕW<%&dNp󏯦؋nF/n3_ ?xhf17xR`5P"P/4x4XBāV{7Oa?.=/XԶ`tڸXO. 
.7~@ϼE"}J='q^_.e@G-`C,Wy"%Ns6|s{=dr~#oV+녃_^q2&e||D9_(PFH9)#`r0RFH9)#`r0RFH9)#`r0RFH9)#`r0RFH9)#`r0RFH9)#`o7iѯ)#ܽ}_MFhBFh-`lfs0.g},?7#X}M|c}P¾f,lª?)s*U#ZtG?qf1TOΣխ2<8 TH!rUcs؜/踖ZodL%U7yX Ċ՚s`j: VF6-61$?FOhU!C ]xb ,dw6ց-$|TFa C.ƅ[ R9O 78"d)舕`mlЖk[ V8"F#WZ([\= 32?4YN1Y`a&g'ͰYEEBޕr*]OsћV4OYôr6鷫!x  ,╁E+}v`f<;g2~=!D!zCB=!D!zCB=!D!zCB=!D!zCB=!D!zCB=!D!zC7AΫ3/xggy?o/ e-( ̔(L kiпê*jGx8p K@vCI |;}ߜn'#Bޏ~*Ho}vZȃv/@^{>Pۤk?Fb!}}1|uo [AJ=^^nyÎr{n46tr wF |l~|\}a9[DQGF> QbFs߻%|g,i9壤8f z"R0MbWFr%_kq`ܲva-1%[k_ lz50Dlu `o%m{2py}}6ge5ilԿ]W-uJ DZ l.P@(^X!Z+-a>4F>\/cۂ_-cŹE/\Nq2^Ç /18On-^)ajFP-VV1VZsNJwWd#F~:P>9TW04ήE#v ܪF}lۛyŞ/_S`'Mle.{R[K!Bx'娓[Ugp 3Zszw0WW7e]5=1uaNƾ¼|SRYlMx_5pAZ/WǦ; bLc.1&1A bLcĘ 1&1A bLcĘ 1&1A bLcĘ 1&1A bLcĘ 1&1A bLcĘ 1&1A bLco1ص^7~Zs¡xsG]83ƂJ`r 1U\$ԭ0h-"1@kԶ>jnڏBrfГBLNkSh(57  /lK}h$[Bvۖ9XEmB90Y f]8U5`|ŀjy0 k^'~ܾlZg"CeVRz㞉xф/ÜS2Y?ϻx2osQxd=c_` Ia.dlAz nMfӸܠp w? #ao?8%TSӂUO|䘆^^mY}my j?2,s~ 儖%UnGtJ0C2*;'rЬ4ag?Q%.1^kĿ, ^⬹4kTށfVeNc{`(lyׄ bW1$C$?!F ~}5CR5 ~믻 hj}5[k|~@+^ ו&ԹM${@;%&o.ȝM]Խ5vW~0&c7<Z= K7һT*r1{J6PYHG5_ယJd׷|-u·<,~cz.*Srz>зC}|eZ(l_B[ CG Z;!D m~9O\glΘ14`dcd>Ny~ࠫl@Z'Punj'S=GVeŹ1-0˲kְ%R?*&BtyWPFtLCjO5t@H]$2.]: )k5}v5ziupS\UtUiT ff01Nbj :xu^q.oc{ j3f:o<;!} !(WIq&F`0[FHfyV9γn^LNP>Tx-!S@):ߧ(|,)õnt!ȼ*̳$ GޏFyilN;Qh ԜwC _,/,A/RQ09UR,3O&0 1`=9:AFR=U+\Dvj~<ߟ=tČ!TB,ݒ5(\B*`%Ldiᨲp3wۺTj2to<~E^ɡ!\<_ޯMgZk2[}11a^  Ƶu96]Dv;^φ2&! DDlL+ Y2Fm✞HӣX)c+?!̎jr쐕g|^m.ĠHY} r)ptH gq6qVTw6$=q@Zu:K1mWB3#j;pq@9-o\J6\b2r坅AJ!F\9-"䁹h突5Z90j5xWzeF ΓJb vWҁÕV^d}FȼKmeS&xl8ۗ;g  <5&\qf`n}L ^JmHL ZùDz%YpH_Dg_'[mM#$.V+1Yژom\ !]5i"}󉄑M.uFi8Iѥƈl+٪8Eec=s%ʔa:1G&ӷL ciISѠ%$sYi E,]&@9̀dW1QM&c^]΃[tB;IY۷XfWl0zˏ9ܰ:(v|pKV\&"X~[w  2}ߜn'ݎ_zC¿r<zW:|xUF, Бv;AaP;*K*eӥp :O:ƍʶ^e$MȜl˹9kAW[C{ytl/"?Þr6 ܭ}݇[}ɧINmꕸ-znH+ ٚeZ?ƒeɣ.܆+Z{wNo}!zϛU`7:J4ĹhtvN;|'UT=-$WUnmDM Lrޔ\T f iΘeShRNjVG5rԀ@9jf2 Tq]XN"(ꔒʢ3;#KJ3"`$j5gb]B>\^/ktn%,!ct&0[ ⦮Vr3WfוW_H_jJ*^_gTU n[IFuK JRUOlhQUjK}UTHoYo5S,ꨢ6Lcن A5ͥR1\eLUUU5RjA5ΉCxTt,1z,"i(ϕUjٯ lRZ:޵ 5Y%YCdL{rS TlorlBj_MMlѯ%&ֶCϱZo0Ǧ:v˭d]2q;*XǦU$n#8&V5,XJa |)X=߅Tx3 >6*|zZS꯽_^I(d2FIh.*mHZ'0 gW"{+3B{LdivS_}W? 
7⇼ʹ\ A 9KO}|Z89TݮP cd*N#I+lѤࢷo_I^8\Ns)n] -K~;7'o1fne9 #(76[1I@ 46hR ߢ"b#y!B5HꝽ^49e%*9뽫H`dNh A>$}@2ȣm?9]l#|]}#[{4YL_蛗 ]p[E s dQTw%!Ke; Xע`v̫<uGf/Rpܡ0fi!v=QjVO$qE4jO2W+{Z5'Yk,FIV:K 硏^czWljUVsj\Abjh ({Wܕb3j;{vV:9[+eI>+6 \JUŹUrpv dzWlT7pUղ/psj%Woib 䮻j.&wUgϮȁb7WϷܕhߤ0*p uH~I?/v\x6& 'o's.HBZK|4bH>Ϲ0%-)mu)u.fF ^:b}Ư;3~/g=dTjx}ɪѸ"Рwq>" =Y'œ=`:{T2ZKQ&Pp!"s&懮GM5XCo"jD4cWf=?Vay-F4Fw NH/_Ŝ'Pg^yG{G>8WlZRl5zfk>V.u0v` c{UEP ֮7pU} u;\U+a()=p|~̓ 5O3E4Ӭ='Y J pu裗RKzWl0JWJzpҭ?\U \(\_ZZvWNKwWW@+߷p1Yfk𻪱\_M.0Z[kdAeb㝵Ag$J~;|+%R!Dž'>QB~D~|epv{z~GIlA:/ZdBxh#-HX6ALɌ)(d Jh B颒KQYrL02T[IP|T$ Q]A HDA:km, U̜ޏQta/i73w?JxN ?'FwN}6yzz|HO:A1!4o|0\hq8)8.NJ[ؑ8dK Q&9(cY "N e%ʅs6I&E6CWi1NT;8M/ #E>lGzк;|15=ztO pR:9p+Bk uFo$zw&`ȑBZ5zptJ;9/}Vmv,Cl^@Q+NU{pg<t<-#כ`h 1ETn+DZ Ud߾ܚLd뛘JhPBIX OTC!r<ȑ]SKx-G(K2rJ2B 3 3O'M)di(IUv; 9k*` VS@BmDHəߞ/L w ;杝g@Y)Q*JY%%Zu,NzQ8U!?+Y;\*=_? qsMOr-+-<='#tjଭ<^Φq9Aj|.urjW ޶ϓS2Wh!WT|B9v>ˊױɶ]d&gmbkY;d.U-l(֝}?&J_r՜_yG;/Lx6bEyUJ7/i)] hiEkĿ,nG- NTGU?T3f9*YԢVT탯W!b7ʌžuQ1vz"!x)0Ӛ|۽,fƳ%o$=Nx7n֟omv+4i4|dyxKmVfcvj]35~zOfc\>G7󣷟olIٞw=!bm3䋝{y㡦r6o53w8/2# <&546d^"7G6񶒇2m'ʴ9j3m>ɧj'F*Ʒ_C 84㒍Y56(՘`"VQpGJ3Os+@y>}dEu_ ( JA DNJ\Y`lp0$os*;CWf?qt3C.I 3;N)h4QZohIFڜIxmT L*d)eQƌĀ"e-J(c;jgvaVb3-ygM}Ow6NdŞדvyS K==٫m >kD-|.8ʛ - *Y2V+R5u9, y(Y+s5 l%X&'-) }a;3g72vU:FơXc!XxT,|s2&3=67~@㯮_FWɨq5C)ȨbTL)8 M;J*ȷ{ (d%S8c(. zGiWF0I+!湠vgPԦQ i?.I))jK! {!k8%Oh;B,I!QeX`+2ب82F >BID]̜x/y:\\q("Q8 o1C.[@ZV31$! 
tɾ/e"%Qˆ8E)"ӫ rl tEA(g";ZmIriֵ#bgFį3?8[V.SQr(.q 8B,kCe_U3%ylEk)s``cD]Cv0<<uX݋eWU渭n~DVޣwPFE3=|x=%ק{}MqQ\cN)I@&h@FdK&CҢ|<`;):,9Qؐ\0^}Ө*x:>ڗ'Q1/`/S1j UAJ$M BKC)*u,/9VABb%/PF򈗑]#% Et$A!bcS`1%4$SmtΩ@gZ8B1%8MpoM> `THя߯j(kDjY#y[f3=]UzBP({STn0A|6p9~稻-9"zWtbwQmnxێmSGxD%h !HA8*ču* >ڔxP)$SLgz:Yy)"SFk":$Hf!@:a 8OCp1WW=sOa&㑒 K@ݠTVeRҕB1!`1;'@ #P[b݆4/ ښI=u=Y /`D>X }*ShT* äp8ɴUIXƿIkr(x锜B\"9L0Vq >ױPJqzt{@AW~E1DZ+pbW4H5 /W12//˳ijcPIx1z}y[P|:GWithY_ȖGH'яGg:r{oB@.yoS85zmiŋ˨e ȓwΎW=?GM2v3{)S9|Pڛ``k iD9HOڏ,ΗC?~_ƌ>c2NNy~^I=}:"%R*DWxƆƙh5'xb־OazJCgqO/:VR ޽ήeJ./Z?WK[[i&1tv|2AYt>?^dZsOnYFКFj)"{ZͿJlVs8i!DNt}x 7Qr-._ ĬȧSb`$~(2,Ʃ)F' hY<wZ̈U:|)tBsF#_@{Lz _曤rRR4 sJMNՕ7Qi@ xH uS`lHw&=,o{IRNa,ZYV3夫fEkw_q6ro>{QK` \P/҂`-b,G?J. nowEuu-V8:gRd9*;aV<0*Lv`,y>_CqhѓuqaypEQ gEj^eDпm~3!M7@!4!$3o__޻_eӡ頱tZ3~؞|?ZedE,.{OTKM}1SV )ye3w 9`Üb\Q6%P鍱S!=SвѲX H2ޮ|h Z YF[YAWHxIBP{`(u fI8ǘˆ s&*ph@sL.$-<=Ezd\I ^jSNj>}b{v*u͑Kx"%$+P<%$m=MzP+=iГgD*$,$PQf)\BCK <8PdQq*ڞ3x7޳a )e)q/1fbFEL!C> zeh-f$/\y_'y7y"&fzAm| X|ə˳ 0)%cȣylXӧvȡ!+S9 ݭ񒦽dS>Ei`{ 6!+,0vB9yvg0ƹf,.ynm2?K4o"%RM)LHqqW ]ppJD䁹h!6_n:~:}uԱ,[w :_d(.{>$Yz Tɘps5YAs(:hx1GqE7zz˻"uhԍ%@4Xu0(] Rz\,2!zbg[N F/7I@Vs.ޔW\wlL˾eZ` (Qt 7B IS1,&& #Ɓ \a# TV uooo?!t\vdd F)&1DG4] WP(x1TȒ:ڗvn'u8c㜻ΣL]j3ػy2:HDCr A>UQߵ˵cf2}744=u|TfKzV]$;jwʤ] >ct:_n3n%*to!~zqucV ^d\ {Fr@n֭] OڟO,#u蒧OᑟߋSAɧه[ Js~6Ex+3 sp?nhhvt9E;2$R2"l鬕S[ـݧ[x|FxEEx`OW@C]GK`Kن{9ɛin "܅🡰 ްnyѳO[:Zxuז^ђ#w||R*5#d~>pԊmC)! 훬dF ݄UJwX=*o(S) F8 IF %Z:#K@݃"@P׍.qkUyH\ENTɻJ!2=xAkY*1 gcŗZ[]z+>Ko4Kv~S ԯ [&ɛ)f{_1z}O5 ꩏QTn"&YACkZ)q&w( 16+r*>.7RY>o䌂 J`H5=hClZt A$AdU ļq8 pZx[)X#\-pkYm8{Ƅ]ᩏ.}/&]_M&.}q/N6ee"4 *ֺD) u21nyE`ת퉇:E5{PIRT¥ҕ&g Ωl݊BB$d j'V  p]P(X;km,"ц"*#jٍC7[Gzz>n!6?_ønƞ/,c1,CbcпzQ6.6J:/Ч4^e8omTMРl$Efew1҉t. %c\ZgE3J,V*βb9ɼ*7xHDER*7ldUdVDEFF<͚4MU8&W03 c'yI˗O&ɶ $xn}̤)%lLRVJG1b`Bo+;Eo$F<M]ӻt+ubK(v9 Q ʑW u*f?P. 
U)b.o-pLidU2I*o#9WOFV 3:Σ :Ĥ5b^25O^ {]hF;wI>i>ۯ/b7O;zOG%+E&w,8LX)#;d 2CiYyh`+4p"Y-3{jxӽqC-eRf⮜p|q?t9Tvpc4FL5@U&PRi J1=1UW<U݋14}Så]sȠbǖAU8TNX_nw*,v^T, =x׹+P b 81<0`!SAEo3 =^IN\\OP?KomGaDa1([06szK*3JʖrVQe4@*KU+FΎ9JUim\XqB\1B RsBՄ1QhW1[ mtY`*X#_bZ"g9M&yD4ҥ(xˁٌf.8gWۀ{ (O֡YJ 2zfKX͈t-[itr<" CE&C6*$ )K۲uF+n=W Իd*$6>p](]F貟j2]bĴFgaKB,#[рKʭV t6vL6&MrΔ, Z)0hotU.%ґrXIdlllB8V^tNæּ|`PF`m,z)S8 mU싾}c4&wLUVrwFio"Mt Ni,?Ij.Oe,ѯEJ٭/qZh-.T5 /yk K 5τu  YRyUuc0!ݻAOodozjxiDž.,zD<⊤JkWSjRƻT).<}[ʢe6iA9hM㯿muXqktESJ\q P[&gKk2qh2)M&5,1ިӻ.S bֳbހgqiT^1`u85?&t=~W^NLpIa_B>/(ncک6R™6ޛ=糕@tgrhe|Ӌ 'mtmk^_)kxѿ so8HW;7`7JMm@^K}k˜Hǫ1WB PxLx㙫:4E&LeGǏja+7}yK%c)9Z&Սz[H7p[?S~6E 3Ue6wM~׎kh^6k_0?wU^v7я0nSHs8֒Lnh. -J~*4HK&|4HixGs_ 597{k>o4S٤ȍ^Lvf_Jo6ߤkۼo[}oxFJL#Cmbd-Ų²ľku) hcra(bÓ^N4PCkmy-ڈ I(^ k֢l+pI{nvk:\4bF=AI[d$RcH,cRZf$H2]u("x[Y$0x(⪓,Zv[Y$%2me=,l5C  úlja㤵yY9>]a箶}-pqBɸ"HZUR]@wE{ sjsUoZ<|VAkv߶6qA}^^lhd۫b쿭R^Q~5؅W#GrRѰ?ڦiZL!r ݧ΁1>lbqZKi୍1 OVbrt1 tמz%afU>k3V_`䴮Aݷˇ>CɌ٘OK!MNćbr%koָ\'`yjh=W<^ECf*-U:W¤etІo]9DDRN(JdSqlc)yTŤ\ZCW^nƓHԮ/e@2L ʃpL31bO5n+[j[J5HQAXHAa+4*Lj:KWd +m:X)`j%טAؔY䱜u@z6T%eF%7M}|/~eCwYBK ( W5;xӽjtHO)Po8B&Dj[]$X̜짡V Z#[CxܡdibbJl\/񚧈av Mv ¤ȝ7B E'JP2 \-P H.Md*0<32lM]ޅRg$G~JvgZ0"Gjt4z^rJd  S!2VeiYfҰ7J= lQ״{֘ׄt?̺nP#5ݗᾤ˜&xK]]mE:wm;ngڤ\TTq_)PTwH\mJ;$cYͧ>/vn>Ɏ-pܩ.$XvYlzP9sr V(1[ 8kOQ" 6x'9qu>r ?jBjaR3KomGaCas!EY|j2+A@*)k+Df+㌨2RM⸞s8)Ώ /2zkj+YֳCdw2,qb,t8!d.E9j1*; p*fu9 L^rhypxdN @D#]hoMk` /A sB&GOli%LVo}2OYT^M6g'HPVI,TT%=@KsL As8 њVm ts!.%.76xTLBZM'9Ze[iM>,x,E2jLl GUISҗ eH1ΝW `)6 J{pj\HYWl3r1+D sk Yr }z5q6C^ ҃H3G:$ƺpz [BAH)u!*H OjfVcYf^<>4OO;ק&G-6OϢVYamٵQuNcmIq܉p|wQԔAG!e8Ek0,.KZ uhLɡ0ۘkՔ92C6R'3oBv=1Rj!VwUh^4,tfJ ޕ#_y}}Y YYH;w|%Y%C$ǪbBA$9MB 9B ބl*PKxgU*ku gg'IQQX;d+^R x2R3'__!գS465z anKVJ)@P1_3 :1Qs< Y`B>w(h P*Zk<QOen[S*)OV;5*1*!jc/(Z*D,>K(rgW3UX,FΖ5 D[eCZNpnDPk RGX ԣ5O2lw>\Uprk[>|R iH~ 7lǝ6tr2 8'958țί/&yv[BZJj\vA!^ Bh|gQ=h[?Ƴ.ҭz(pMM(A}Ɠz\C3՛F;CAmf}.ݬCLZ$H tmF$,,hn8nI(&|Jd,|ԗ(٠yua i(g;kkm{p}B=4P:]^t#ΪݵlrӔ V j\ƮX5wfuӛ,^e\ӻ${UKڲ;;ݽ\Ktۄ.5νfFR=Ouϭq8G .۬K ~h6b[jݾuƛ;N!r[]o}m<vAptCǝ%A5=x5y˥xEm{/wm>%GK܇ݦ*k9Z>-mZ 
qA|{V\(ĕ颵"D"F^,&p)2 p{2x5n.2=r@ p,gjqK/|J)ƐM!dkm pO7W{ n|yqT:`4Eܧ2*'Ed<T5N3 R!iHq)]kD[FT!$LgI'$qɍTy ^kbB8Rlfx1iGP,vggo&Zꋭ{zT(i0voվ]NJ ˙͞n6k)͝>!ybT;,giO #" '(`"2:y$A: $Nye &S!YE0YE:(%X-rUaa+X(XW,w3gɸ]{sKf:Ճ6p8~7Lg9bNCJQGK ቕJkkrQ!SX2 >ȠL%4! @MV:) FĢx.x#@ %C|GX PpD C@*QٌRCPUaDTGD<"oLFbN& 9/"us>tOP"Rr*%4 jEJu(6)M9hruTTThB@M T;5, 5ˆX͈2T{slVɮ >FJ"Q*Ri`ï1`L;m0FFT,9;>p`4RcW<4솇{ݟ1g q%fq#vNk,ĿKdQRbD:S+婶 G/4\*:&ջ첚 ͂n؜(ad}JcPu^S`ѣ4,"@d(!aQIрaFG5¢{ϵ\ͦtH}v,c]6ME"yoM?;I>!2 Q9 H]!,&&6 P.r̄ٙzwJIa%,~S8|{P8v2_?ҳ25VGuunj 1z 5HkaSm^Q@Lͥ]\Lv(hƣYG3TwGc+l5]VhHy(:wlg6^n{\<7uYȆä~zX?lm\t=zR$HZ30ӡ^zy7cC . I"An=U-ꖝa؎y3RNeҕ١ٲƢ';LD }7:z`l2 Zac)*W(6h=#)pF"gs:Ui!ϋc_vBw)DcLZ*C8 RM%k 0ùC)2#7#C$(6vM${Yκ@Q7'_8&H66|Nj3@LNnHfV{#a9/jIrDgR zI)%p*qDKRsFw[S= YvQPDJ]BE!PE A5E5vv D(q|Ҡ짩2|NdY(eT9@ls2J|guwAqsWvq{nzÕ>x r2yA}* L[AF[Hq 9Q0LlТ`s5G0h7(M9Rq@MAOtbEBP,8!Ag Ha"̲fA 5[n3 3göPzMԱu{WŠ5w=IT1d`QI:ٖiþ\jT:LI )s1ǭ1 s0ti`Z(YSܨ?xEkmM^xW?E>^'7"XSw_L!ڝO}i㧱-_RR2 I5qx f 4E₥ 6h/秱DӣƇ9E(3-ztsttltL8<9&Npk B P3q&o,!Zc%p+ƇYx4z?ضB?{WFOEɢ| ]|HKv&3~Ŗd/eXNSfUTIQZy֪D$-F[,z,PGGDdςU, 򸀅ey VѺUnixR U!Kx@yI& {[^9Kpb`ga7TT TPb]JSvlؘmՄ()Qg] H_] IlEx\ N6YɄ)@ŧdCQUJZ"s+h7`ʃ0}ma5i[y9 K*) 2FfnA:I%~aja?ap;q? O yή!|zs3?<x2$KwpKF Xlk2& X{KAb)&16b;`%LeXT"tf*itq/F/=+4hq ߷`zr/k[l3z];A @SBr b㛬S_4qJSZNi7Dh GԶt9ɀfi2y"a-ɤDBqAWt "; CH@ٙA* = %mT8.94n =t@=ϵ 廍T.]vU~uGH\1Pj,+=u{yUb'xZVElc i99k=!ʻEƆdayHtPvU ѣ`eP.jAzl]&g5iQjm-n$c$D6D$cPQDY̅mqQ:=('f}c3rY\ө/(tAUAD)k+0huƣWAG+4Ɣ(AW2؂$k6v#Bty1d}gr KܣBP`{SkbjI@ AU*^a!( P1] M j*䃕>@@+˾Sv6y"# (J{IR!R\ ]6hZD1  eUHf N' zc lȹ:OIb/Dž)YK*{D>ޔ68V9Z; 1{Eɒ0ƩMfef+kg2Hng۰UsZks8 yH ﭫ],A2DΠ+1+&F/xjC#3RuӋ╕J'V'$޲`:YX?j7j2&d~땃-g/T?.F}IV"ʘZGk H/vϒ* ze(I,t)}_*-s e*#(b{"%D%e BB"5|k'';S+ǃ4I{ґMN`} yUt!#آ%`o=Iѫxzl[lYZJu[}1 1#'q@Hhz6aDiUᅞE:cu@_<JIV3YL^VO]?HYlq. 
݉Dž1ɗHGgtC ٬nяZe+7!M;Yy ֐X a=6l&E|qߨ @yIA6 ^7=UC΋΋kPX-λ"uVX #F%)#}f3?Rao۰׍TԴUZQG|NhlSQ"CWZ0Tsi{EYS_;=2 ]$ Kfl`b>j l~<-}4^fiB؟M_%1Ia10KU"Fl'sY}-s@ miٯK[_(?w[uL~Dzu8vڸaY"5Ӳ&Ms.Y]{<4Ef٠H$ά~";Ebw)\ZmOﮣqn,N34BЬ*/ZM}ЬTZ24_a&YErʙ`''+J}WWJ҃zYbWЯ]J.wuU\suJo a;#>zQkG]=JRWJjץRJýQWLn%sOUVAU^dquU/R\URA]BuPx.l1,щ?U-o*e4֐Is,ע/CY lpؠ1&u$K`f8<| )E=ߧyl߽Q`U0Z`4k4y@gi H4'N &Х%@w7>ӛH%778_j~sgGNOǟZ!2B1-uouw7o۞x0zshmnNyy`=#%v=/fyZU뵣mojT%^tcź"ZB STJmh%DdYȓ4,JrDŽ|)3Ǔj'?dzefsCx ;ZYʭ$f_F2W|)%TA TD,،wbO*/\=fŰA٢K^ De2CdV#:aaPZI![(6՞,%EZfRMPA\V՚L^ܱ@QUm >v,9[4flkvuIglǣ- N?~VwoS}3E|YNpCM7<䜿OOSp#1MP)l7`WJ`ɱ3$7 KrP<(( g4Kdu \ޞȎ.x۫cϦez$ ePWTlT-ەy jiּ]gg-W/I[f)x~6;XhQ}i?M`][Ψoj4K闶,Gb79Gvove)x?j7O *6Za-'mnvUcqw*;c>hpc[2FΎݻQȐ&it4.&0X@4<ŏpin *pϼl>o_XDX~$3-t,8,~~]'P'm:ɣvu i>Ul0ĊJ-HU,+?n$^r>,pii?\=n鴢ǿ 8-s:w~rY`ϯ(7K7a!c{:a`p^ks}WwgeƲ?7Zi&T3c3IX;(Zk;1O+89=>_j'&aDb Z/R6I u-؄f-=_h/'lo`6 Dt";Fvonswo6oHOufrדҎ>79{5G;OڝD?2->ziE-{O6]h7Slpl&x؄֞9)3лcmӃn'Z6wfOmȶD%40&_]ĤSDԭ6AcG[('6Hp M&Dm? >geb 0]=ľ);cÔqž8 ]p!0L9'2{Tho*3/Z+Q\k7=T>~=QmUmR裄m[Yo9,6z-v(k[zpm[[Pbԃz] pYB(S %#(120(ltx5PۜQ^kgn)aT>hezʭ?v4m~ɭ^P6-bm~<~lp*%v_/eӳ}V#67d7 YdOHrn+K^d/\dIղlyM$$y_Xa,kj]C!=zBoD|Ec Ln2PL@oXKΏ0r5gn{n0kN`% <YzF|x(2a DyZ5I(FGR3f˙ǩ$a<fokf_9(]$JO"?{2x+ : O坮۪/5gFpqR8'N: [E0e2.%xhw/DM&:}7q w,D ZKZEX11pG;;lf~a"=t}Sz&rHx#^_uyoJĄC$sYR 4H)>g.WEɜS -Zb#u<$nK%7>n9,'5*){~i ~gv|yqq% o{)qXGx,_Yb,sDTi*U)R͓.T6,qas`jf.HY3GRBePѲEBd"gSs+6lH- 9hD;IIikű6V3XkIYY'H^m>É]2L}2k~Q9 AjUW)6W 3yAC}kiHF͘sk9?ͧN-.Nuɂu-x77}qZ(ZgW?L.y ]T1yJT%U0Mnz/>ޑx:ϓ_Ud{[6 xQNV j>)4ΰ~'!o >9wj{yڤw!&f I.wbX ѹ|^Î_:LE[=gqB;_{Bڬo>;\WqGQꑛ.?8춍4Ӻ<秷7sV;ٝ8m:7G}3۫{U7.S2֤|T.!uZt?XTtx]~9z`:k佽}n-wguC9Ϸ{v[?@!Vŭo>ĉ/ga}Q=wC^~wXnZY"m/i#fgl@l Կ}oe` NКzADG0-g xn`瓺q,pJh8O YR[rjM.Ji^DLPKpS=.{N3:sqE?G+lzéZsH`ˉ\B 2*TE7UR ?PMBoW%Fbi(wڒ+4j$zCrD Ę Lʆ!ɹO-;E-vXV0"UpoUZ V!Io73i8 4X_}!,Gc;23WaU'?|7Lؕ|QTvhG[@I8MD_*oHLP?ь(e/=en*rJ.Vv>Ala1xtE(qޢ2J;8kY Y@QKf=t+jAU& e;J0P*т2dEMH}MmFhKP>J_H VST7J/?\XDZ1 aG|| \m\лg Wr3jVm("jނz 2*8#F (NjjdKq)ʤ($,N`8Ljk.siNaZr_~LJ^FtbnV kN"<5Wn%baƦһ/>_HAq?wm*$%#h l ]\KeC~Cv218n 5*] 
[binary data: gzip-compressed log file `var/home/core/zuul-output/logs/kubelet.log.gz` from a tar archive; contents are not representable as text]
1W !k[Ng%Mہx5m/$yM#ѡ0]lnW:<'`vԚlB_CFRlD tyX6 UtB5b;Xp=uGX!f?{ F h1pFj;ה Mǫ(P) U:bX^ϷЬcI"O5s[ɉZ@x49)f ]F 3]̳M[ ,v+[ڑ$ūbQ,lD㵓\*~R|/O`t}z y+2%"dU,ʎoO 6]cDL^]r~W~8ڮ,&!o<'s#EÌ^&j?xu_G/}`7jZs$_qf[Y*LB-g;%DQp1a[.iP;>x븸hT=CF X1C*}S iBGſ#^(4V ,܁DΪnD)LPX̹r3ݮ/ӂ^^Yeٶz&\+w<X߿~}V@%UQCrF dj lTV+ * ?B&(xh&nTtX\ Rdԑɖb 8~ Bf_DM>dcl ;xux A9E`k*T=b`4@nk5Ay\NQ"RIΓ,B%i3/Az0TwU8xdz )fbWs0Hj|C=y0+I.5?}<]/u]0?;I :IL6m%J;L]j&B8 ϔǗRDr|hJrIx V&QgWrXGqn. q٧~k N ghRtA*?RKEeXjBF)5C@y[>ݿ\#>Ԝ6a%TR:Pc´jvU8g w=2Ы>p!MN{A A曚4HW%񲘙_6l^^/o{|S}=}sz{fG7 k>P"ݲ~(?Oo~ssoR\6뵏w5_W̼T&mkizٜ>Xw66ց> B$5!'HwF~cHPr["xD+$BMm٪L迚xh?.F5GVrd;n it?9p6.PP~mAxůϢ.ӚR;ڽ~ӛ:pk f"g6Ά2ٞyaXk6̓:g4S!^Qէuwٗd/n48yF1X$P *M#{SM\dw@`>#%8Mm;Nb9&b;La1^b8ue'=_~ S)rXUߧ߾)0<ۭigߴf t*]ME`d!(r~QM5eZշ i ' Ӿٱ H]2e/g&;g0i$\JbhBLjsy:`"_֘b9~)/;9B0}5""3EN0؜tt&/o|P"g';=~Gi)Hx2/FlO`|`'2!]LX02ﻚ*Sw֋5h3vy:j%"I:_?%'4y}pHxWk9ق?ɽ,ӗd8Vy >sHxWRbP9JD8-c^H+:O'NtռcKr5\{=ޑOTCZǻK"W"V }$§t8y}pHxWN.fćY= XDG!@V*Zi$;̹y S!&B`°3 @Z;q,?}u_[OP 5'g9CTy?V7f@O%iL'Wr鑉Q h!LP”sVkGJ攼ܮ& 71&0j@hk~ ##\[ MTKr2) Ϲb4hDk\ C$DDαUGf"W" N,.1)2ܡ(4R4b(bt4ЄaU'N%/aЩX[ۄf*A= Qcag n ]n @LubNE^}r<;y ,R"-'K0#3T d2 -@N0ҝ8,B 5Jc`lj1:8N:X!GąQK 3N) F*UD1J0$S_.4xme1:A<8%A_Q-L鈕TzRmX8i 1d hӝ8IN=JE*ϑӀQVZ lABI6H##"n;Y}vFh7H"XJ%1Ct*s `$AB8 kKG{qwQMS "K( "Π3'R!eF0R;PmTCmW,vCJTR_(6(h N/dDXsW.rI<F\IS "!fc6A{.%AvňPK#,E!PFa4ޢ41R\qbn<`U zc%Xs>J<2"2D-3IIB <Cp?90~nN\p¨#>A !c!V(;G b"P9+Cǰx> "7PJrEs:G.jqpA&Ě⺰+lZA(_>K w n?Ns3O^uK^VVKgRXQ#)uB&fĄ!NuDiDc<'PjpVzN$ r=]j{l}.RѡNh7i닳K?iMbj|8[rMKM_[yZg*>^,J$O L}+v2Izɦ~OiU(v+{bc{^bgCqI֋ RXyjŞJiQs5Sn*uÐgN"N =c )xR rSG6f)ÃybEs[ y-Tm4յw#zgDVAuJh_H60v+V4׻ugN"NG4HMu8'[N!^s,bdK@}ItBH0;F,_9ꕋ hA:"Y+TD \2OPF4>DLH2$"Jo cSY#)SS.J1qݥ}4.F`S5\tQ~.3!Ϝ%bHM~;nIxB SG#6^06 +:`3h d\T##ľw)9)wsP-oVhw!Ϝ%m[RKEg, <0sbbYɜb AIibd2rK${H5¬Ԋ آcaEMb-`Ѧkpha&J Jx8л+ݺ`3h|y8ѻbC>x1W݊n]0SSܻ(XRj̣tR-Bc$w9P\*qFtq[ac FmNjNTY8pjyP,Rjt~IRlQR[0Ez)d~} y-┚LrݴDz)9)w1:У̻+ݺ`3h|yu$5FVY)9)wq6+ݺ`3ht{m/<} E Z H:2L$^mCRd !ZEL.^(t 4^z%Q2#9eL;:EN;9(Hњ/yuL/wt[ܾ,wL_tl\ݬ_,}/Y/}W^"I2yJ'tcFJ't ](%דJ-0kR r>, pMpI832z]0SSLN~th$ޭĻN6m+ݺ`3h 
ʱwS|)dM}4F0ܽnfn0SSmt*.^jEmz`3Z p[õӌ֖qbkbqtf+(%.! "bYUS-4)5s"X0HHA4R| :N#žЩFxl:`}yU2sD3\!A(:AeVHJs-uH) 4ĝ.u()0AcŴIeGvTI (9@v+0,0lcz(H]1L4}}cax]M>qz@j;zAvBj 6GhW? B69{o~f1?܀8o|wܘb&wV{u\B$c%P. 9{{7J9B %=3[G~D' \C6L6s<>iMpɗ +ހ]%z`"Bּơ4W77a tZk'_"<}(Y"6"SR.vI"OrRkymg3W/a~s0\_oa(OΉxssi60o@>'Güh2zTtW/?+ÅS:߾{yXN}$Qq{ڙ5La,t8@13EDi^ҞZI.qh/_HH%1:0cRcDL'>2 ?y,zN X"sBf &֫mx S_"nۖ#Z5QAt@Cwr̸#aœAL 2̽fn"U5|rzn}V=A"kڶ40;iHqLX k"nh~Ub$r3R$+m{Mx^#xڱ|ئNcleց: )}ZF~XzUbG Z+:'d}7{.O:)Uzwn+~ @# ȍl0E{w]4v9?ұEQ|8$ζhzq#XRV;;..Iԯ>ed'}}L+b Tݥw=sq增^ 9v2- ZP/sp4PX`YaL8S`L9IM>bHem, 5ӯ'ava¤;2`kmW,* MbE YeD嘒HJum"gpș!o/x$xɌC.S[2N786IH&(6m3 P2<-ZMBIrg&Mf1$wjLe"^bh⨡uk75%8ՙpay/ycI ']rd&9.ZPXbA9EYѶiҡ s Wx~[q!!߸6) dP OnIh> &ߛ|_4 +s$BG%L䃞s! 9.nOB0`+{\hBpq4yZ%O֣AX]o_K fA['SiBfWvU?:ūo/+* !7d9YR@iBD,U,*¥k.;.>MXb3TP .!k#![B HSPl8hwE.HrX%"@0D MDfȸD寧ö Xu ]xc$T@B@\]Oh]$$T@Bqo0GƋ.=1GƋ9]))IJ1KBRۮIDcօ{ٽQ*r 8SHJšFBpk%^ jve؟RȻ-SEI[FƋWh c]Vh ;ZO2Rh=N'\hՏTDSD `i& XCQUvWt#M0ҍH G:"B1H:֚{"k6ɝ(cDvJo\k o *k7&9eypEQ :/nTlߍf G٤vx||!LЧ&zCpL]sۼ ,P"ZE)RGz-1ͣ2}?Rt9cMQe:YLR5xɍA|0Wh_}eUh_'(g`XD2q+&M%(0E2iZG BgHe)֊S4Mr64M'UFm:۳@wQu{zrxsNu$ ݸH?^|wgk!_[$jWZm<]}Y"^dHnS3bgZ>h&A3= 9``@b*K_q[lcѪ1%hf'הYF|{>֡ 3IOtSbl66 c4R&PC"R$ϒX 0 p$D&qJ`e+#+c3cXT\]mX&$)׮<:MR&*0n|:pH]ݓĄ,5-֋ݕ=M?_ʘ=^`LkQv xY+ ~ˇqQ1w0%x|\z`Qf׉ަ-; @,0*7'05"b=Ny͍xSF߹M{}}?f1ӷGĘhePƫ Kw>z&%^#TzH #TIfJ(/5@\[NFzUeZ>a;y’C/]:q\!eyÞK-O~UbqZ}֭j4QoTjGu|tV\{O*fp2zus=>ÆՋ~d{d;%[$&}ӳ{qz|""5@mӢEE.FGWlx'n btK *NHU[f&_׶rǔ,<Ѯ'ӓJͲ(92Q 01Mv޾ I$*aNib4@jj  vEq=E ,߬-icktAJ25U8ٰL=@&-umc!b2m13%v ssR8%s`\Tʈ-??7i XU⠈{eI+95AϡκҜIqX4ƣ̜ lhX[sFpd);E  )v#͗;nfד!km*1&.R ֕qyC:ЃChL]X0JQ5nс0dw7!2^(8Z{ ,lrP\cw齽f-*8Γߠb1rDng]㢫ר/7ۣ0n6:)I5Ldz6nb*RBF\#x Xi9:N(a3jeEn}LdgU'6>HvgES!QfGqښmG>6Mgtj[hgb4G()iZǂɀV#kcS7L@ffkc,N2ol̏ЮtQ`96=Ny# +HkWq+Z^K+uAԏhSt??qNJ^Bki&niNc6MヘiMshRVNKJ$@' wǴsTW;AKS@sX P(` =X\+C=!|5-O rQ++&O9h գd_h.R>xϼ1!my!M זo붼[= Gmy!*1VkZn:ݪ ]L!깕j~{ll^ iʨ=v=W_Awh+u~A`Yh!5{&v%=Z+f\ $bu^=3 6eM_}G:P`!DbO ËFУCoȟt 0tC\p\ CjO(9 R\I( :MO+$ =CF F" 5" 3.#zoHq^!/ (64C< yУ]zWWЀUï5%ߕ閭WnQ^:-.[=:-9K~OJ{域ŇH檋a>UP:[ЧlO> 
yj:æ'ii{qF-U_gBAJ/4KQ DÁK^>!rE3$?jÅgwA( ׼\ pUjB<gqz㐕5ȇ qCH n[0> F 0$OT(Þ-g;Cx4UG(44_QuQs4y6ϗ?~w6Rȫ;qJEQLHt4^O3ǿ~SiS4hŸzQ ~6& JKiG$cTo1JcL8ZH L(N6ĥFJϗ^Gߧ d F {E_^ŋԎ\G'CWµsa$2gh_V> ZЪf<8VQ5TPTv{G?AQQ? GGVCTӶDqќJI.&J`~uaB%%VR; JJ^L(+Jv%d -[RiKmVU_$a](A81B+FNrf08 Ċ(G$癌H-9b0eQD5ԍ3~v3'coCJ1łS<]U@2C(ci3L0*H%MBfA<,JhFԲ\;o=vF-ZGrڪbQ{l-w}zˆf6fX+o{~guO9bh5تz&f~k_Z M|־l ,h]#\zrQ(Gh-*lydj;G-8:Pۃ/%:L'&Sƕ6RQu#8LcyގWg'W3* (\)K+.$+eJcŸcZ4_qplI7`#z,//57R,V7N! Gmnm6Q [ԅU2AebcEyIy!bd#8Z*(NJ$d*^rg4]/`J!R@efhnླE@$*vOiN&ObarJ*v]mQ &kZtZ>qi SF}tGS쎠$xaU3-ɗcY"*oލstap&?|zpd"),CA;rg8 U%Ĺas@a-76[*I^Cf#M&` F1d6mzyUK'i yd.3%"bK2"!\6J$ixs~w fd۩l1&c˘^<}߯W$P@aDCtd gIݪ 07UEpF֫x4z\mX02D@*Ñ9/ν:[s_:Dħ& D1 "4I̳q !0Kxh,F A-cyarlǬF3e_T-fZel|Ƶ~xKUNyˎn[P&sl|pD)"1A$-(&NTHdD-gU=O0`X ❓ X>_ H! "J`  0y-a(ԱN㘒).#kSZ1gd`BM~zfL;zi"?LdmJLܡȐqM$b$:RP~Q,&T'QCY@`(x-D3k]RaI4M^NqEMvQHbFI:Tf l,YZLmKq>{H6{)eƼڋP#n)_=?g@P )w^7~CHi "|bo|Ӄbx Sg?BO;`~<˭w&KYpྱY]{|K%S-D/OI^3S#lת0ň.XW_x juZikdUZq=+rBEQP~u'ftj/DH%RQ2*f*Rs<1s7 $+0F`*,+E4O YL"?ĊG X@x^xU2Oۻіz7|!!$ Y(_K(z008vyn0Ø?hRsݕ(UҤ2TiRMnn&)O[=?xy0GE ʶ,Qbm͉0P)F)Uc^Gy* 7YQUNq8vuu#1 ٮPϰRa+q"wI%L&qdJ%n~k]D%(U3Ё+hQUfo;{1ȥ.VakOvL'yǸt 1RK&&>M<{Iٞwtk> #:H_{_H,V& eԵpҩ&#/!N5I,A@DOEe2S.NZˎtmֱ) Ę)\UTsuRhKsuJsuuʹ:.ꎟV(=W_:\RXHy < P agxJc*2sg3*2X<^tv4`VvF~҄v[ ؾX`R\ 8LmӞ>"S]+vTգuImZ!3/ƲXj+[xY{yЛMٚ"޿'wB?u4Ybekkmڻޮ67'{3[㉙goq)/t5تz>}̭M|־.So.=ɪ%I Pu$+Z2q֭;~ԄjӝgSaĞRe438#D֜J踤Sq!!_֐)MnMWtW؄ :U&SzIF&Z&1ح sta۵o?<{vk_BzWɮh֣cxm:ujlkl2IkmFE"mwpLL5ؼȲ#',߷(v[jYlEj7+dU͇ә6ڡ:~E9y|}\7)qלd6zA**o/2uK~ڴO nTlQuXiip"ԃ]*a"[*!踜ޔvNIuwL 8`Iߦ 3n k,-e6ndzZ! 
C{@`Ǧuʙ !(H[J`*FAdk a9 vЀpNi>{/Wu9 0R5eޫ|_yzs48UǻM6>l{ճ,v'z6Qh7Q+ev_U:Ҕe-u;p˞=?9"@D-hDZoZx~Q{G3:M>4S$0ӗqr;[|_SJ gcCi Oam}RR휨^{=8YS*w^3cN=ï>'/)H+TS.+ z9- H 犋 <)uT&$=Mm|= 6_oV_'Xqz[|vϧ|y:UQ| ׬~i9o`Ӛ羴V#r2ǤT2&Q_ya0 ˕C *诏P'<.:}2?}"_͢4b:q[tc9 ;?(ۇ"-iHEu]'7EIaH sjexZ+5hťFJc@!CGHq'ѷ)*ӳ }S "6i%uJUL53Ϊ;~Gr)@97t9eM*yQRAEFFiZR&dtȸ0Z$ʘcFƨBhg)7f8['r|6ld\}i_ZI&G(ykߐ+^ Qm<d-JLyc^+K') D DJa*9} b\KZ QR ָ$T 2BR}麁XHh0 ǥ: \Wu^3rnSvn)e e}#w eWlwoٷ<|~Vy`z׈x :K{} BjV̿oO_~ZYp7 ?Oߙ"sFw>:X7KIGןAe:F,{ww>¸Nd:SHS^_uWse)"Qf2+0nSᢚ*6ML }2]/ay;σ~T(e&}IѪ1~JOSbJ林B3,ی.гHBF#xzsҤh\~ ׷? ?p[)] k”Z¢'yb;g;l4P@W /",m`@,'Ϫ5G}Ru6ϚTum1D_䤩9rB>yА͈2}: 1ra/ i$I69H\PdqB)3Q6CQӞ~S׋ٯKehqcTMʫ"bX Î_og4A 䯷OGLpN/n8Uٓ%KKH$߳t*v7wajt~NC? **@;-h3'_RKE0M, s*+ ?{_(>M{DM,50 gKS0m& ʭ.o^waq7]CfΆ\1L!lNO$G:/Tzv :=cB@߁ڎ?F|!/[ uFm!\T݇j sTN9|\+%#<ΘxFYeVI,6*E_-T4#}9qې 0œlV8:\Gk`ZGeH'߹I RU,c @qz+ wVHJ'n8O9cɘS9T9%~8[ .cJFH@[ӔPR(>Y~}gMtMGێ>~3[Cg S J(5Z2%NQk., ֕/rB VhKCyv2!ђrSD Nq#D`DIvz裣n"ȷ|&q* )Ni)-Ӏ"n @98yiqziNh#)Lv 7QvĨ5Ĉ3xn-y&|4%#7w lOrzelY+8бs- ҽ_ν]N ;z4:ő|!uEɃ3h0n;`x9? Aӣ>%EQI7NVo@9aڲ8i0"l~DP"ms#+V]yG#$ U̪JϝF(0ɔib#"%XhU 2S1¨sMl 4c @ɥ5ܐyi#Sr&P-u \:A 39&Za(W%(ʴC.NZAP[E$V#J[(LŘaRc0 2n衣nhf<}F0>OH A+ى? 굤=wɵS)ToT,p)th:WC Hs[19Qv{Exn/FT *KӄZ<8ǽ\0BJ m`D{rXNf#J17o(V&mfc)DŽ29:F -C ungVwjLQZ ;Lt$R;fP*#Ep45'$( kIL#S4t/YwAqPGF$0 7j+F.yMѮ >PZ4  Zt ]lAȢvb^)NXg@Jv1Ȩ E%1Rhᴕ( +5W{(J)j1)/QPORA B[pwVOi7j BS w _I2FafubH)ͭ\סTqB@65T؜b+SQXs}9U -YtUyNW N;ȡDRgUBRuਔluWS.5L9azTJXg{<@ L쒼S;.5r6J MNpY#wJr4br** c%harF||S2KJ\8GPQ&ndBN9#:F-ན)+^hhFf5@1N @냿bHKj"  h]]/|j_JhY*<6$yz\+Vm;MB Gf*U1*k=Exԫj4CGRN_l?g =&7*\ =%myAp1l de)KW} ~TKy@uط[:vq}}V@vh"\ړC?y©dBR##c)gt899*qjcfKH.6iw#y$pEtgяugq:\oNE :yLjOn<-cet~G}? 
<98$Əwaln0D^#C̃?]r]ozOd#0?Ξ?=N#S qzSd"_x1J+lzi^YkS'n;]ԼVjFt_ OX7y1X0su jpAAewK6dG'”rc\ɪ Wn{87rϖM( `8bHa(Ĩpт&4!7Z\rvg-b7/Wo݊.I2pFkOO茖ؘPY8VZ '5 &T)\OM쒨eBee,@Z[9PBrhj^I"I$Ae;85&p23(8M*mGUNEԺZ*IR⦢9x]3 k13LX W##28Q; V0q#N9%!ZS\dr .6vY,2صآq#\\9 y"S>D @_OaE)9%FzJ^Ҝ }^±ˋ"RYb\k9Ef/JYo΋Rѹ$ GC(%fFbQƨBK ''fH@eR;ZHקx/-=b|4꿄 Y`XUؿW%ʑl/w!H~@-7ZH篭,N"i4RjI\!gC0i2Z2N[ݓ!_),N [0IaL  N=9ȳMq)/ ,qDzC>S f`:H%QC~bqЊZNP rOpPk bCbPy2ɬ ^рytQG)&NZ-gIaPڌpX>K<"`X;`cPlX ^'ѳ NopV% ph@ҁ ( L^=6YJ S+'[M ?wS=4a wk/m(~"^9 骮N6o PřJrUM%Bv l,M* hs+ΐB ҩkwǏ\Gח MSB}/N'Cr&ɃK? 7]nz=}R z>LEh]}۔KUwf% 77pT3ƫ4X4|n톺ܢjc ۚ`W%:"Y Q16Dr/"9;^rxWBXoF!9n?_¬fοLXfh_^e`m\Iru: flWex5mNnӉ,ʼn8k[?m!|xSIxgM7Var: VZrOfP%o/5GD6^v,XJh@go20F`m̃]6×gȦts!6m> 3E2d|u Ÿ/tnٰޙmҼxq4 2Ē7A gu[k+:W55+f\ӈ\~|h[޿&v{~+\ݖnrѰ#G'-vVj*-ۯW:w1}iԵ|񴕴Gk6YUk~?wtv$YuM֞oHYH;RE^ \& a\@Thj~9MT eP;K&r:D[Mv(%TpEMMhKIuWQ=՞+4|;1PmxnC*e>utV?Ԋwt x~ˎ '0_a2_/sd6w|7AQ}$3pD#-$(r:6wZ%Q3nt:9)4ZWAȃBkJcU+yh_^</Ym S2ܢY.=EV1Y9$7#r#O i %$9GדQBl)s$?GWP) * g'xٚu5qlZC[iCOIIّ,Q4SdCyd6J'U)25݁ם '}|! Jh68EvF-*W`L)29z9TMTIx̩\ͧҥ A_:E><\f\!2^׌;E&gk L,awǣ^>f ._} Q9bFث5 Bq*MerM@ x6 W7-U(lfQ}ZMB1 aU@z8f}&շ UII^gbqe|wM¡PMFq5 E Jku&_IF q^Qٚ`Zs`0Q=娭xxGfl~=+gJ9>pxqʽ v%imx\ EzS* m3Txx r7m(h8MɃLSJ[U/ j~9ĦmwWf߿Ent9שXRK0dL5LsV(˷4eMLi*LyCvu@7֬p*M^(hD1t@i`⨹u&Td1*Jz= x*Ae[`DrXk)l!1J >J$&xiЖ蘣-f ȀA ;7~ \IQi$brJRJ\J@9a6"m4RP᰷I=J#gC$#d[6dk  ;%%2x% @rHv"q]m9AuA s qD':X {DG"ΫDAkHr3I$-h?D^Hg$qDNu :<,`o@&fB{35o= =B{,뗄v\?}%)f:`qW|}g90Lffוm?+ි|~U60++yUN'щWٴߩ_e~lTU/Kh|^;}r8VߚdN7 `hI?kip70 @aS$*J$1 `Lļd Q\F_X Rgk.&|bBOg/`asi'EEp1sNY38O)Cgk&.&]Lrf.Χc]L>_X5>_X˳i.~ĝU\&~\dF Nw3|2}3|_s18 3m@Kμpy{"C_#TLBU؁ 6zږX>r` u.xTx1 &OڑR{bg%ZݒSW%ie0mAS%qqLvO+SV|G1^5S}0fc?N& Soz_'%~Ja˙~â\w g3M 迌1o ?1d1髋/,aɽ/[h+r}맫bkb[(z09 \J*5eR :5S̵͵ X{/×up"3)ܞ{+V^g؋(8sl_ AΖ0as nP%]9?s=|!e"f5t|@/\~=A"{S#A&bFЦ6?[ըsZ^\|_4%Գ] 7q=KMw/g_$]AaI.y/NS ;h_}s(=sSyI68jv.dS7=nɐ7xs~ˁϚέ{9ugx6xK@sw?}=t8}Jr lƁEo8242(`.g$po@/gݠSҔSb}Q|9;_KlJ)B /1)(L)K=nG3d"R2؈(%)Ŏq1~O5)EJVJܺ^gS"@@3WNrqod< ľF5A(yth}<֘cQMkcḵܤ&piu5f8!֘O7YcBXc5F5A(r|5f5XcnT4:ek'Qck̂|B1DYM+k1 )d1s 
֘%B:֘cIM1KBY1s@6k5fXc5&5 r|5fH<kjj ḵܤ&j̪\5XcnB8Zs17 1+t1s<.5Xcn@4!GxYk'Qclӈy5fb9֘՘Ʊkjj ḵܤ&Lmڕ3C|xט%RD>XOΏ A 3&;`2b SƇbcXf)Ҧ0I0Dr|Ln[Ofu|ϲ+ 3y9GuH8:)E6%EHTh#ǯZ@Kx;9LƸ6!e\Iu4%b_ĕ J D=ώ[$G #ΨRyiLY:wF2Thl$8BBYHZ4mW疂I敷VccjL!OT!Ւz$TPt",g 6Vie`R%C9EeLcI$!WOB' 1*ZPYi.uVp AA\s+ UE6(thI3J`!bJ*B ȟ0 6NA8rH: N'f U#!Cm$ s}!*tnrg`95˙p&[02VHH*PW$ð5=1H,4AR gPft1 <+$1 $T?pFp8d5-56qkd8om@:?V>!Îxe}{fddG_0 ,ܐ9Oc̿L&(OosqN,pI./*c\< 3|[\wCs)>G!'ۛңd尿^sނOqZI:+3os:'`+! xİk/~ܝMhܖAP#[g3o?O<1kHG4㵞臷TyaA_1ze-4ea}rzB\RY(:m%$xŭ6&5\σY֔" _t2D0|cw8WUPWj;^M~326k5yB8Ey‚l\5ț[rۯ(sJzeR,e»TK!i[W~N*]b EqgB`: f- N=c}g26Y-FBFK!Y2rbB{c'mQ%[tIV2HƦcba?B$d,k15F>y;lQH{aswAoU4rN܀dnߤ>`+ Se.CtqX0'k; \?~GK0gA~d8~/63asoV 6 ʽg1wlng;O;K꺣pغS=O!ѩ&B:Lv ոj$gݏ*ˇuST17Y;{dd c !c XjW2ҷ%NLw8YvϧXՓI·Iɖ8MYQ(/y]tĘy"hh 10OE-#l{q%9)f8W !T2WP'zѐu7pq yr7~k<;-6E>{6|A;^_פ4"q C1չ6L("r!i!BL~}ɛ_~ 9{3%9 tIOlT۰nvv3N U@?uݬ}=xdS-Yy=} ~QKCдtWYLKXY}ٖf4QT.75Q\Eys+p~(>r&W/j\=})*jTAU]ƫk bc$6L5"BY1r`s1)3ι<#1;d4 1i S(H"N*5bN/0P͆:[:!ĠFi:0=&mD(]plr뗇u]7y048&QF]ɽQ%FHQbm qE$DTxC2L3sGKy]I[ !όtExEb6\ uFUAufں$ߴzy bI:KjG9EqMoy7l LYL٦($L!!\ L2kXiHA!г=}.Hqat|NEDm/넪Hiզo;޵p"!2gfH%<>^@lJIp l0WPdB+|EdC4_6]Һ-8!|?S~ ^_\?yPCnުׯߍ_LɭF"9wr/QvG4l"4<Հ o?-~%sNiZqQڗ_붓wM?{D[Of?~7)|0ՄoIMgyw§GTiPL–,wίb+B0fѢ=3{6 <*T` g@ .`Kw ,.zͻ=Hxܽ߁U-ܫ};a  pw zApA x0R? ?tO8[ijboWyQiuZ+|`1-ƚn~T)MX6DSkڶ&̔m\U`B \! ҴFwܒzg.OhB\j Ɗ=wQ;80ƭ[@BqNFAfLr*&Q*^i) H? 1K*+NuFr8yT%>irѸ r^MjʣvV<1Dq1%dȻm4˻=nkhq J.B KFsn2\ml_!Z=>a!a}cXn UB{(qրN%"*,@kBDɥ6Kd"JUj|N!ay4K3q2)r F( 3,qfo~mΕ^9"!w6\XxEi>>NN)t*AO eiSYrSS=de/iQZT›OƔ-H$|>EtAi40ߌ42-lJu7<_~в{?]YBYB輬q BiqR 6|6z#tĩ#7F#fGljwW6lWReX$w-+^X^yczaqϠ'Y,~Yɠ}?|*+rĄ؊WpG[ wLROs!_PܕkHdG9toGN0:x^B:䥟ޜCA<^@ҼaTɌ gl+#{!J:Zަnsϯ&#N#w}sJ-i30 Nbl hO~P { =]i9v 'dzQYd$Yc+KHնE_RaO+%/ߚ-u*;buߋ:@;F>'#П\pmyu5C62 tySRk[H=w ~5:q2̗G6xj6ԓL?py@˕Q. }e: XCL29mNOZژ%.y(RnCX=U0vɳvnՈK8˙:]Rc<Zw1_Tf9Qڱ$`UL/̫o1*xPs[^">ln:Äs9pƵy?',@rsxFbUkY.kc\VQU8chHaoǜiʻbJ`4=v&'-`oF5m[ii_e"Z4=ꖒv[\Jt(閍 n+%EւQͺ Hs-s^ 7Zˀb._ъvHPG4WW6}$# H. 
u.]bc={T%?BkD@p/Y6~ۈS_~ukÜ1HaY(ױ;yWpF84:5.V^ywQ=#$3Z>Ksl_7T\@=T=;/|[3oH"˱u9evutķ_wp5-`*٢%3Zz>~F'A@ϠQ%~>LZ+q˭Jj3WF^Nǜ9QO(ѿ1}^eS}!Ɯ9+nw SE*k"y|#ZlnQ@PB^rb"D*뽼%T`Q%-~í&_gjZ+`OC,'Dn~ˋ=WwK~w jU]涌ﯧ>>9ryOyOFds]Iv9'usUOoqE*mRV'㴖'ȝ%Φ#3ɔ"K=]|LC697>~r*s!ID]T~qLl<ɳ$Ɠ<Oq |4hD"haSl $_# Ҁ&rԨb"}Q+"}gY+F =^:"W;drh*CjUEXhM6\vEW(B$ƌ1BHD7n~sV,_u /ʒ0Ok3 iKM;ٝG,@털,DPG zfE' 0Gάb6DM&!Ph 񒛔wyAy3 NuJ;BLb0o5RI+q&L2:cKAmybͅX9Qh }B!0\i_ 3W:6Y[:kH#"N$| ҃\hJ,I,8Itzj"hݎR18,1CwkF84ʡ\Bx -iWጺ#e12pn~A-q.VH Bon-i8{`4-F[\ڽ|7EsȧO~͔X1=5Uh/N:e3#$R2lE2AZnԓyQUeka1Kre)qCAn=0<(y/-:Ђ*p:F,sZLLM l̊)߬s90]r{5.`5h Ph1j-wmMsx8@gc3%Z5bڋHĝf U9Z}ƍ(|D Ƅќ(^4{vO Aa2Rn" |?YrL) D9iM%|nx~Mlj!OMl ݶv,34iw8Jy)p1NXieGz~V+l)#`xPSxk)^TNbnl ZN^y׉R:0l3Ȁ_x+PČZKo(Nܤ 4 HIbҌz$g8gP\B"f@N3 "}nj!͍Ompݻ>ԤQgVhԓ4@n3 ឪ,oo#o~' a +HO1k<]z_ йOyq3inx- 9F5ڜf؇rSm >y{7YoOǷk#.RIr RYAHĨFN^1s1:纾U=3ZK;تze zSk:<2R/+ !sTr G?kcvhȫ卐D_\8X1jrG$Ie<&QVqS6$ JK< hR eDy %VZ8h-vn`պm17Cj%hKiKCjpØh0d5a>껫Bynt|d\zfiAkgKbyJ2h*64X$SPT[tZ "͑ݿ1~b4+Z(ZZV3D2 iƔf\ .1 Tgښ۸_QV-)\T)n&=*Ŵh'1CIib,3,j8@_4NyNZDF>4_VZ9jnyf4zVA Ju:eqY. : ZDP_G(7YTڇY/nD 5ur>[\=C %$ۣ gqK:vϪpg7^SBFQǼus'˿t:ߤW7-Th%M.w_ۤ? k򼋏F{C|5~eHCr=\Vћ=zR?rG VGFQsARwcrN)ޞ5;z Muˈ 0HUv&mP!ٓ٩91R!C/zy2 WDޓoI/ ./Uˍ n|}vLD|a UJyu4?;]Ә-\9Y^U?|pJ\{SJM2k M -T4?q!?X<5>9xMp h4ք3inGÿlnwtXO4nN+vYunExu~./Wb=EA>^d CB8KKݖ}.~H!ÅQGbrX愳ZP1^a:R#FPR2wY@+/Ҝ 6D+'4*j T!wQGL7CC5uwLgOCie8,gQEk2}<8Y" :cƻQ}ÿVgpz>$hJ[="Omp[(C ^d+p:-(@*Daj!CD8?:֡I$.k/iNx4CKX(#gOL"WRc*zּPʖ g=GG[_(V 8w1Z%c ɳDL{ZǓ2'c>PRO;rT-058iM'?-#c_NVΑS2v` &ؗ22ye>M,e>22-˴ .2l/v-@JRyCZA%\H$3Ԧ|#p\g(GkRQ0{f%L20-2ۭ "iōN-:A$Z{Uyb H*) SGОidЋm?\Rg_'b2'->y"ZDoLVa .`R f|%]jj Lq +zSϙ{kP*Wsfb\v3 ؠcV1 $Q'WGi0jD3~,k9Ƌ9Û"B΍ₐ^EfPsJma{*a1SG?GB9s Yz;wo~p@~M!O3ClU"> T}$ʸT-t:`U:K[BMzCdf|Ie1#Zw.+gnh.X:}w_G1u%&ԋssƘ PU _:|S_͘AR 3eߵb{)xӥzt5('5ż0_љ~OYVeo ԟ |l~X?my'Y Q i<+K*= As! 
AMMɠ xg#BAr&4<:DjGoՒ5?ﻚ5_pI:X;~\7 fpz=_\_/wdIp/_\$Δ[m;~hpO<-!Ŧ˝V׀bb^A'+d Iު> f괄04t܌ 7< CKc^sIiN!Z!xٗI/f_&;lkv!X1$ Jh2 %:tE-,R H6Bt"`*NRq)azoFsv.JG E<`"5<2D`Z%d4NH%X ZAJ m֖nPm]9%[:pꀿYwnĤ5G]W^ 9kEn6>}'%`RΖv{%„TQVQ]C5&V+!1T֔d`|XJQLorsn]tL ukɉ0XԷ5&w;Kp34\(nƣFgO^W~.9cma=NT/in_}$K;jFڎ.}sޯS3+$sܨZ$!9%Nıl\&s=@i]l0aFv˃2Rc5E˧Iyn~E:&zI/_&lx;J1jF-ahutIX*[G)*:44!8g6;Z:PQtRg>D79aNO=kmJeL7šm`S7!Gy&cPaE>pdսdk;B"k^N]"< Xl ϑ h,:WJƮ*U,/ԺTީ3:S-8WrFq#ؒ;XwmŶ9<9@P:]UÏ|ӓq4ʫtPv8!9-ĤvL^ pRp<*pϱX ƍQ" Les3zViZ*9N-Q Z' MQi 4W@ `ۭivyRA̺9!>DI S1mUÌv*hM tX x1s)4A׀œS΄|t0)EIcEFQ<:2JgH/0Zgjn{ шh$` HztիU&S1:] !Aʇ놊O$f z;JbjrM6C-ʛ$yR-J{[/f7h@f9\G%wFusL1%ALJS7?LwM)}jbcoWENnXJ%2e0O߮3WgoZg8Ֆkֳ%6.$T9i%I'22'BS djךQMyc0|^k=!}\»XwqТڸeqrӢ|k-lȃ&J="7"J(+ X0%_N' Ɔd2s13<I G>Pdt f(nX.ʔV(@=P6NX✷RjLP Nӎz ɅLerQͩ+yW'Le]e*"߄UьX*c*pUhUU@UT#;ZL}ߌttIAw\.U64NB6H:p}կHQgV12" ʹ,VVڪَ1>k|l"9bjxU' W;8I&d" |IIw#ؑR f6f.fI>XRX"ƻ`H&ТƋ@qi&4Jc#}bcҼimU8`Dhue|Xu(:(8 {peSex{uZ~Z5/FԜtùd2aE6WRR!9',2蟰<2\?m Ss ܄E"xs*sX(ש&. ЯzZ5jn{ָQuA)RԌMZt/{]e:$]\7_E~ZGX\Y]ʌA{Q^$Hڪ2ZQzW׾GA@o9B`@f2,9ŽYG qzTG"hE>PA] ׎;Uu+>GЛ.A/6H@@RXIvdj1bҕT@bFBXLhJSGUHpG@)% oWfh]Xf> j"fxS.4y|ON7EX/=#Sٵp,?4%c<))t"piKʩXrOVT2ԭviT~|hiM#;CH80gj{۸_ǒ"M'[Iz).u#[$;q;\ZmW+m"vgrfXώ.䊁}\2ƟŒQd&+*j-ۧv1#&JR צ"IX>B- Sd½dR:C8j'4_c5lSbV#-KԎ$Ȥ2 5Hn.4e%P\BĺpA |v;4 # 8[J6%jcqD{XpKd H,) $*usDQ+¡+e_ȠTrYV([cq>TD:u{ V?$j ]FVEQ,I[pɺWkmr[)H0p|.Re."dՐWذ*Z em AFUw$M<ˬgQ7ߗY;kTڛ>J[Iskgr2V]R\v e`R]xJ)^ᕧ*LJSVQiyo݂8jϞ-ݻ2|w>/PJH8 .sq> >6tSczzf+{ysA\>|n#og*Z& 5FЪ/;hKyW1_NF}8\274}i2EKd%,g/@ K5)3[j#_n,)Cқ*3=H%"s1-nEn;--Fe\]4H |Ǽq`HZKo$-0<$֝WS<2wLj?Ac'B<T}cicGً^{?\ENѢu|jEͶ ٦qU#֓cShM9lLX1/5Rlřެ]:)֗3QAHp4Njɤ6LN ۴P 3E5G8EQ^pniK!/c41۫!wm漑eӶwW=8 ;Љ)E)^-c; 'R)\%[FL|4W'~,Ю Z>2+h4Lʋ@.DoJ2td6v$D*._&e(T΄^&%8Rz /!5)R"ˈ!Lu w~ufBŔO[zbڷ,fjwujΩ^w` - m1VB$L$;ai,+H1- Rz狟T~7<,lj|}z(sOr( sFNQlab3̙^D9jxi5s'Ki<%Q:KY}iƐ~k dk WL߰g9:C{J'glcAհ :ces/o*+J$C(>kpΥBu̵7x'Z4ܤKqa .br1G)ģsOϑ9 j_*3S_ 3jzkc*¸D@ [0rM90'ai8/`lTteYywV* :Fyեա'8!ɉ݊3 j] * \`hIV-6O%&l޸y|l3cD."d 'e%bK tJV)1V?;^"&uk`6 
űBaS}ܕXujbFV~=G\Qjn]P|+$~G֫rf؝ZXQ5vpEe0RVÁ:0b${Utl#n&`L%ݺ䝉[ERw)qv [ح$ NUUgl= :My61.X)kxhKk۴HSȭY$'c.E8;e>%ڠ|3<3"|pF(~ťR2\TXJu5_ h;~1ƏL@q d&.F_?﯆>P7_[ <_Zz_Ua_U}`ÜE֧T"%1RsI:"e1.' J K&e=]|}4G#w:FQN#|1n0#79!m5{3lLN]O=p>{iE 1k#5C)O$LIB,rRK,h]u.[7u.[7˖i0 d)fABãXXx`g:igT S)A1!UVR#lOb1y!כß$c61)O|i~Dͽf=5bk˫pS &B!*pJ-`kN !Rt4\nEmtcvB3TM9 :`5HxRAHI0"LKKd$-`c]ٳv4#k,`C]Y}x;ls*ekJ"y̴T @P+gYCGR,י[A߃Bi&ڃf: 6؋a7)wn 'evb$Bpgw7ҋЊe^H\0 $hvH_]wM/3P lx$­9&o_6'3|sٵږS&mJ yo|D[ kG(Dh368?WN-{]1,C?@ȐhNmwΐ]N0ڥ$Z-H%\$ČhKWU7,";&d ;vYF&Iz[h0>'Vl\[֌P%h7о쟕R*Lan$I,`=DP e|u7>zC69$dT)*8&'U&[sjRkmKP\oj }gfv3Q:<:|*@ә{l6e݌*DP#$#+>|NYr*8wE2Wq?sin|JNK ƛ3(ao[H8b)-mL"qڷiz=|OV3>g[oun;r3G[o~kfkrdjs{.+T0i6k|1r;K6B1I7SL$zYlxJ"[ζfemڝF &j&ޜ~6@(dRL2xBF=m.Sg+DYr1:أXyq̈U9Ni, *2da=)͘\@dJҔ:)Qڜsn0ys:(Aša('("jeiV2@LH*Bܢ J(W&1^QR%b}Kل F7L0atƭ c!<, 3b^SAqb*0J0% b(O*P?]U7܎p (p'a+PR!=J')+$u"D&5Np<')7m_~/(Pbt2GWQg MÅ/_/*_32'㣣߿{ś]Ah;^^럺?1{?/~ǻ;M-q[EoYo27[Yozg׳ _B;l- ܐ%9'i?v{-osÙֽwIvymzh04'٢D-΂Qֻt7 LCɒRs&+Kh=5UwS-<}窠pq<ݝ' +8I3к&ZVua@Uل Oޯ{Gp1E yRշשRKG=Owy/ph)ܟk=Uav~3:R؅zlfw 4CI4wo_^iP˯ם?WAoތzN18<_|>z]Hο:8 :p W`N~馁ñ59ً~ .Wf\ɾ8OΆp_(޻ 0c6`̾fi^¬)'Bf:2:W_BYg^6ƽߐ׳$[[ݗud8l]nE_1U\a4Va9t0FP Krj?UF(lN=18g17kn9 &\|}X-\pkׯ e,"qf t', (OO5 JP O@KW^pvO@s?!lٓqfP] QHY P P kck?j?a@ 1\ rg2J>gx8߀tYbG&Vz %Wo-4)&` k^{@:U\K,V9L5,NPU 'ehXc8:bﱵD!$U`S X&LQ]4j (T՝d[sEhu)f] Y`Q ed 5m*4A9\:֚sde0bWSm5-.tLJK^)dAA^z|D,39H?{qruCuߪsWα˛Mا.cHjTC)i{82Z4~լ-:%2]̀fU(t[er PCNe> (:du #P&aJ())jkk!6 Li˰t)PHٮB dLql Z&a8!Ul!D 溎#\mfI(A[n0͹@fh¥m"q({:s)X.A_p+^mFJ>ev7=ЧpJXJu< yhͿF|E@z⁺7.tu %6EMujTx6i?Jmq8Ւʊ$^*oשy5݋7e2Q˞.nTKց,`} 4PsYې3m0-mEJFK6FIvApb*^KZKB`^Rܶ$ $)N:&(+F;rՎ-!>g*VMxg\ $ b !LH<)u Tp9.(\auy ,`WYԘ@`EYz,NFڶuTъ$YUil<[dQNO-qIQ֢8M7vt15B"`3%9+AB6URK2-#VQt!/b) g(NN(PGF- bdkV/{+#ia꧅>$JP%IiCi3:${0HY˹ێ ԴRz(}6ctȥK1zq)*kO}#A[N}Cc,XXC%DM}P"ITMqZB"ql - 5 Tm}{1 ฼(B }CנН7cL,T7MnI^Gb EjS5 3#큵ԱW ]G[,bښP,9ګccpڌpI{ב&wpd@Pl9rl.2u70͝iA칅~)!-"P"]ތ_3Ť(򻹝 R}͡=O+S\ ՠhfŦ.QC>"|^O*xnBo'DiU8%jYDVN'+-W67rϸF'iqxRNn Fn#X B5_ OJ1[$r/g6eoXdJ@vYǤiu!哻x_T g:|W"- 丈ٶ]\^,27 &‰ٞAѢ m>]DW?Mfn4N {BYȀ\Z= 
PԐ|5s"F"i %(Hg#rHWR>[ܞdh);eYQ(gKۯ-Rew;n"7Ci5bqy 2J6+MrJSTxls'\–E@D3ä,7@r&! X2^grJ밀 XgSf%w_ jcF0A{gυ y9A8"U`R/-r$L[k5VM, &-PQ4 dk8^ $۠U z`X6WdYLPsvBAh{aM6mh-~/61vbʜtXVbhHCa L-#ypTIDQ(* BBSQ>T "UkuU|C0<پl9 K(͇r6a09;޼DJr@NZ;QM_/aL6%iû`L}qqKƒ&hKvwo=x"<(pC xwn'%DCpx(Pp;x=zSg>Fr{m1s:>.w6}Ùw⛶߬ga?&U'@ek?̞XG5g8s!<&6>7 Fe%x<ϖ3+4}v7qˍ]0{޷jjH@ܢH!6ltS?T7oKK7ŏI"~|L#)݌T*GK|@[eO_\^ltrUTn75RBGș&>odѽS>t40`MƕVdRVŸ)od:P\, tvu ׿QyQ5j :XJFRry4 j5^ 9P:tzppRjcps%y5 )9C6q o4h+#'B&9>b"jL99ɆsfB:} (|~lV~g:돩,n拦SwS\6nsOo6^T$ߎo ɻ?(?]R{W~X}Z{'J aW OVAVZGrۘ!\KFPtCkpdu޸;G fT GHroiV3 ;[N &R$4&㣘K&nɨ68X(W^dD)4 [jI9o1)瓛hnG_..q0C֋ kL9(oc+K\݌?dxczGq2h$Ӭ3"6 &6Fv,a.Up@ZNeW"\N/鑇$xknTҎgv<u&,I7$)]@Nl0i?ﵖ]B^D!!4{Nz9Pj*Nz:N,T7-G0ƎykTFޑi*.Qxt-BKm# JCYnFHÏYq7E^_dパF:h䮫Fn9ƙ20y۰6UU\+Kxr%c܄iA:%tq6J\_-{wv+ʃ/+cp+q_ M;`|M;d6!BRTCW-Bʾt-ed e\)i*/$8Yh逅xOh  QFN*@'Ʀ؀IdEQD' * iIiFSZs6ˏ̀kogƃ ?]%.0&|}'bj`R< rINXx1-mD"e7VoC}TH=G/Qz,h5qmaH!%h.7:vUsEJzhJr'%F{j~HSV# \&l*HUfx6J 6^sFq<[;1D/"ZZ0CTxmPŎM2_iR+K12&Zb=njU/Ud ikgB!6cպ<&RoDDpa FDF+^ (X¸h/X\Ġqh97Mh' W1{sq: [*Y,!r]q+4؁0muS-:…$C7Aw+29&$[-c&w}y'VP>dН=Kиrӌk\}7o@73uH!gwHuiuym>AXrЌfyI=Œާ/oBPB䷱s7̻hPHP+] .R%m8&2s0JlU)?(ٞFPnzfS$,͸,ѧ<PFe9t??gI>MϣSgnxX>xF>A@״O n.(I 냟0pt!m P^ra#VНNfV>EAHrUd[Wq GL/ZJvDD ]휏mDW}=gޭ g v~>&]s|Mg+s>R]rχTWr8GD - ^ke7BvCf r1"3Two^Ͽ};$/"eχw?&XuYpdզQ;HEU5k[0 L$ͨ\^_.eY]HARz޵m,"KC.&Aqr iOqQ\&BlI$~fI=(JCJ"heZ΋;ALƑR;T3ӊ!&I$goGapMY-/3jzMD6fz Z p1J02UvxBskV2X-&vӷwܲ c%-MpAkX EhV 8a} wsH{Xs{tӽWN^uoK?؄Xm4$hZ !T, WA*C̹7~UүtxSbуhOݚV`ʻ['Zo롚R =x~PS ze)K`ŀ cw:+MWe&c< huRx_ۦ,aG&}SIDM ;@Nds%4&]ڸB)J+Ekn]gI ~|PC{p)u[څCp"ș80}RI|Œ5p^9rz*/JapMc Q8b!ҐZL()p58PD]U}xA:<аvѢȓy5Z>VU2Z 0/D  (JBy35J. E%OA2bɠ;꟝UF2]9)P kE{۪Sap[IՎ8OCa^b%RKekٯsUHʨ@=?D2k՜dIIhiݮڨnyǬ9 5y[sjHX JpK!iJۻ`vw\L=toƷ=A.ԹOn:Ϭ-tOً'w)Wȋ7w~ ɝM<7 WYԐܟ6̃׉=Mosm~=s2oҀ{q {e$g[VMh3BS[FI±575q#PrP^غ3 rNEW_1Ėobuׯv=}ҿ6#l efڝ7rt=0w_F0U8j0M_"wp)zJ_OșgO?r2xex>~|1vI$aЮ{vh4dN'w'31\_>,u:wॣs2Mʠh! 
mF7c/(d<~2s2 ،g\ef8ݧeLq nvviZSl$#lwzf/g %U035\5ߴ~!vN^9YzCՕExY3uN uvXSκKg nTAů^|yKK cvC$FP( u3q(r%Cnn ʷ 8C4廻nQo9r.ޭXu>#/@ʿsmdo!/O.|YL:Ѭ%2XL˓~1dTz3&%\ V^.fvx-.iA-x]_yPf Jau/]2Y5IW.d*KPÿ5&GݪGt>vcH!if<[h%ʝo84њvsNAh4(*ܞv|pW !_.SK\fb 5)?NrMfvMk!R !j{F#>:c$]|̪tùޝ.l'QO)0c9'ŝTJ ġU @ Ll2{݀f(z/:S%WrR3/ώD͏A9vb;vz~OR.ON&WRh L͸zG A ,>1&ΝN}()%8wAެ@Zpow~2'IkBJ͈vs0B׎my6W6f#p xynȒDh g0ܸ9E"b"V kLh#"B&xbd ͛k;سG N2cQA:6aV-/a%y/<܌$KrE쾄 ,%ZdJ"wEPkZ}AXs5ۜ ZMgF$crd*`QO]dh|j4:EEMk;9;acu+4 c]EDq$u0=&%lu /NoyƑSUΆ<5VUM ڥV䊵1/+V!y={$o%}D(nO$TW,TB ;YiJfϭ"x)Vz=&unF-18fSU7ϯo8 fW)*KPe44|j)k(ttIشni:(~X:|~,I!QyuUr1ƞ#iU{):Y5t& cDMꛝq kQCiLJUI%v(V o`A1Xzd/>_rމ[V# ~yV.ۗW:-j4kI3Jz%re`ǿUXClKOME +aD+ /~걠-I #!#Klt 2 JGBuJ$,#ƺkC;48y#ZV ﱖM0xw3j&46|;}߮SĩlC[ᨐMqgB!ݠvX(e5WvY fգli A:z.sR΍ʯy5ø&7+ 1KVR{ W:Qh0 Ӝ:n[H v#KV\q$]LASOQ!\x`%ͻ,LICjruLy MKVhR;D[7zPYliP /Ƕ$2`Aȉ6uJ|oF 1H פHF:i&h8!wKx"0˿= yJ'm#2-ebA="culY-XeTÎ ||l^M P%#,q'C њc;mXWGWÏ({N<*;{Jn|{QGh#`4a hoj%^WZY+7Tz?Tx.%#ϲ]+ ^UEj0X1 ?(t6x/Elpn ӖhZQ ڣ/c,:8Weб Fh(zN|XpFލ7UrEg@?lr7NC6TMw&?ޠlY%׼,Ҽs\֖+jM(uT\+_–Q(ꨍ \,KEȦ I F|QT֤֊Z=?Osʌ1OAT"#)#?e$2MPq!A[P3A އɓ,:oY?< )2W_ 2}[F¿}L_|0- hLRz-)-U6B KVR2hPݠޣv'4 ٢cnInd ВDPF{H=>K|z~rմ騑ӽnu/I^%%49(|{)fBSo% G?v=:?aɂТ^SzQt2g" U|wh2B2J̿ݻ>%R-IN~͏'(Rjy=$ IfƆw![Rre=3% у,:%ECr/4 O.e_&e'O3^'mk9h @[8Rov|\?H |R/= -E}J)%VPaB㨇IqtYF,cYe+4T4TTKsH5U ~Ae_"_ơ}F'dp 7laSAЄ ]TJ T|'*@QgQ-P+F|*6#~ A[]Kʂj^N&: 4a \HԊtH-hҵnlXQ&FP/Al[FULh:[&DJNc#ݧpV&\ɳC#"ҼTsaM[Hϩn2@U^L䙔+}{)Xi1VF\g3ei9j)1ܢ壌[刄R^H+-5u2(p> ژ+#]ЧFq#٠2%-t8IBVDIMæ!QS^JQb04[']YO9he$fJl]1BYVbٓ LТĪ3u\/*+UJWڃ$QNӱ9}zm%"OEiͰ^1 -J()Y]X๧`qMFws'_O.')E$ieeW_Oz^&~[|rOj&2M,M95uwگ}^ng=X${5N8<7ƥє!z邡.KVG]yp{wqdYEf4;غll~7rqP2Yj7d濡/l+º3n2*&h[kv߃)A-%>y /pr@GڛA>UKWLӵ6lQpF?ph`R +BC^;*"}k;)AK"L7)JzWզP2DQ\:4eQ{P>茖QDl S}F7ܟT"},gDXqF8<~ c%/gp}w&`YHz W[k`oo~H&a<}QS6qF̸jJ弇s8\a O d8tWwd+'Lڼf*Qiu"d~Ϗ=m2Ὕ\چ$H*v-]z0t*RtݙRK~Z߾.7c0n^Q tW]i>JdrñۥЬ5vm ^Z@unʲS #4Fk[Z/omoZ?ٱr79;=mxi +.LL!\HGq)OT22/%yYB&|w QI.j vn%ѼsЙ&,BE]Av5}z\FWQW>G9TQ^]èq8e]wg$hmIYDgi(Vyxf/HfP"u_Zwٱ sm W1)P+g-n\E=~twQKC>NUcgg!✇ /vy3Jm1%(]p=ok6_>5ڊ"Ah,jTV ud随܌9؜a 
7;܎VG62-Ȯͭde5O)G-6G5_ t bH~.7\/#%j ³"Ix@'j)COq2()kGB9ZG]{jʡ-89UdC_9gн!n3{^sM5'.r>R9Q\;T੐z9%Sx#iѻŷ9aRʕq+ɍVBsSs#@=y!zd}LoGU-kɟt=ҹ +ҭGtU!~3?/Ryq#`=pC6ݥ{䫛O04Oz4Zi)VVs]s_qhFfJ>(?"w35g;#y2 v,ӵuStm 1c,#͟ud=)cR90~:LKޚd'L;׃QrNΙ<9gFL5qbe 5^8,Qۗۻ/Еn?0ۻ3(,tj,#lr>uX`\r A ]`<͹sArh @Cdzͥ6{i& ճ LC$w` n7S(T'3|1'} _3>?2MΜxj( 6_^hTτZ1ESrz55F]tfRweQTܳ^Iwat1n+" 6uWDI.'=KqnQ'_jNg>M8k3<{B<5|4ojI b%o6'R-PMs[NԎ*j=hK;kR>4[VLpH4Ga K$=8KK[6 k,zsɹ]m$-ja(&?sPm`Ci\\^ l?j{cm%@pdo5xl:șPpCP ]\ ˮ3'V.g!9]<.o%XϗI Q~!.f–L";҈Rզ2y2QD`z-;}yʛȭ r|pG,x>*dT뽕[aX\ ?YvTJ]\3s6i3k*;+;.3o e=]Yo#9+_e" v>;綻]e_RJISLIeuUm;d<#2;;xOřGGq{7z삨\y{8P^e7cCWny{";F6n5yP%({[FKC`HN\^) QG# DÀZ)dQJH@;a=>3#)]J Y-4bIoY)"lB錒]L@a`#~%{P.hkNKzi^F](UR3e(Z)Vf;,cРZҨ)D>`^n:[:K>V*z-aGTC%׶LR [:FWJ[WA!:(h́Ȍ <%t$ɌƘ,8MIv%3]٤4}wI4!dR1T2 ҐS3NEEuEA?G ;(?(0o8P I3D Q#)BMc?56BNH݈߀& I J+Lă $ˌBhrRPLwL ALZmS+$zJD4q$;7Eƙ@1G4׻4|ZF˔Pi#Z7 V Scɘ1XVڰ^L-@ jUp9 IL #"dA"L: mNe@=OHD8BS֨\q2G.f~+YTڎpG|*8ƈ'8G J1'wUwf/L/;-ޕȿlN xOŌGp{7zb脨c"zKDKD.Etэ# J5|]1]zG#LT]xfl=B y$NQtT$l>#Ղ"P TtMzA ]o_) ʇ2{͑D_Rs JZR>}I6kQrχ77&<=|fړܛ2wE9)}_O I2]#f))"տVy?/pxڂe)sŀsCh-)>83{1.ew}Zū Wo{*φ`z,t"LƱVIi!tJ8E'dzAT,FޚH<PF #frFAxsZh`IƜ !h`vÍݥ\6ΊwME9ݔ(ZgE.r^Q@}=Ta jی]U=f0MgzMQy|>ۇ秪Jd{px/5Jњ>2|W:ow_}֬z3G@`DvτO 뭹0C`S0!bZ &-N6^?ޥfqA 1錌#"94 m!gH"Ĺ{'k=De tM_uغxvq@NaF62>Q[G`T.=H헭n`k_ҽݫ~Fq+e)W?eˤWIݳ/;.i{\Q}9e!(QLvM j!߼F  (HRTzVxB[~Ǣ,jdRF_R)`3n|_ +cwUF5!&y9r!>C|Mni >!)0hmʭ߂ d2T!FEfF 2fH!x=Pm$z"diPSA' jJ `⨵k7tzmm^xxm{+^~K'+Bn_l-P3eEh`M9rN%HTv B^P5hEz}.qhZȯzvuRPunzFs,TB)kV\tRF7=IqJEW~'-NG>ILc^杢n17v'r6#2&k=,oKIR@/Du]SMY6bӣ}xv@4mbc bѡk6>O U.VUc#3AImMZVaH#0rT 0AuPm8GAwQ)sK9UUvt=O|ۭ_8yř#s|z2w.?8}Iϯuvt\Y9]ybT?\$MmVD5TpAoc75B mF8pFs.a(.&BXϿQ[(Fw'-WjtI- H$"H7G֪F'ѩ(1ID:UyR]:/f_!T5KRsuj3BMfmK9 diݟC[bA eM !py?0ݧ̠R6VM:?LP1v|o~EN;3,hzRDsO@H~:oQHu2\oO¾.u'\G[l+#.5xlA!,]I0|'($\5pӃ)%L_O #cJAQ7*GW`%Y}^'+(p't@imMQE"ibdUGߐ"ܔrgUVLQʺ!=8'Lɮ#1 3*2A!%s`zʡޭ"訌ְϻ&XvJ(ԩR",@esM_ IiBځT DJi˂() Jk.VϚG{V-~(Qx'"ݗpqJlu@^o]\%S-*==XEIn?(|<`mnOgC$IANraꉒ*NbY;Y0r.ǔǔO K _Y6>,G,ޜrao\_9<Sq#w 
\+=ߥ_0ݷ2AjY]v)T(B{3-@8krh,>1cޅ+eee5MI%&j-aH#m1pV5L@},O`y4X F: 1S /CiB)! RT  X@uZ5Lc)r|܏Z})o.!EJ =.r2܏VldO6f^^޳/7t>Id˯ #?rٜykȏxxϟbj,q(=./Eh`:-0dud -)ll1FQ`4 #8t=}uه.@[aGpq B.S1p݆5Ah w˧߿%ϐ[Q`E0Xɫ(@ha-h*a}GKu&I֏3/⧱ˎ^z/1"P;G+*L:#m(0BZo/0Zj~HfA6HҶj%kK632 UQTE>JV ^*>0>Uv9`We;m';H o/?Dw>ml@3_FV@jfX+r*Sxkbe[YfJDitDϔJɽ58+ ϕ>mXh=@5l+w{>M_.B;q(u'-{Ct<(_ i\ *nUJ4e6Fј U%3SsTV[Cy!;Z#喅h5-mQA(e6r%#C;4o%qjGF{(52ܕD!Sꍕt!,o9*z#X+J3z`SYc4V朗iEe,"¦e6NiFĎ^Oj1{Z2M1F.<ҺF{,z?woR5u:G2gZ8c(4qC#wcq'TE.s<$c01Z3.SM=l c7_'9&~ZR 2  "M*RƢe6 +(g;CJxOl5X7bPEiK @ Q_%@GãPQC) kq$ޢ7PL4-߀WZh⤇Go8dBHy e0dh$;*rm J+՝_75*VS𜅿]Z^ YP{ xת쪠ZXPP.㍒{%b𮪠 RrpșWd?y%3k&37t#Z VKjonYm7 sz[;am`X>Q7~ WtU1G<)Ńh bjB]q'߰emǼP|Y1m}㽢OslC($u 8]4px۸k1"`΢eY}.xz,FOEV1uw]<=zlc|FfB;d[ 9zu0&ҲƣJGQA-g?8EgTqTu۽ <72Jz8eTxka}o0A5w֮r_*W{?ZĠFY.qײGHCUƅǏn]^kGz(`rIQ7tu]ZԎ!w O?~M'gW|t{ro/WomzڍێH: ٗfe춅ԐJTvI_wKWnG6ԐEsh)2nf_YZ>)QJi7It#%C5LqoHh(8`֢OJ>J%N.0V0]s_DŇԶ$n(܊!a<ίghbXXǏ-ni;^qb/(oɓ_ [Y|qE(*\Kn7תpʕC֨DT}I^0nl g;IW7b{Xָur۠tZs;9/s;C(DeC:[XUc Օ O1Ϧ<<$+>/26z-}77}gbdNx \ 3x= Oڝc̽ )Kt'vq=P]qdf D$sU0eǸm\luVNy :"?icep8#ǤmIR4 |N1x]YU|WS#b~k;^e,d^O:qVk̀I##f5;9c(U1Qj$If ͙~g{!{lIo1k؁.O=D g6-it{^eg0wԏ`g:ҕʰ,(Τkk^4ŸA;>>ۺZUEMs`VMYaYfޥodnߥ&-7geFo 8Oq&?Ixfj~O^Xp57TfB)[rao#b2s2k/r609 _=k4x@&}[4;MNnnI@_PT公t@:2ȌBi6B+\iW~6/d]_Ix>]~["4{fbOLi}&nNɁ/!O?.XcBtE~IN#+'*QdȪ|Nk!2[LYK%a!sxW/Ql Va s`+\*͚0 +,ȨPH[ FD2`s^yS:ҡr}FB`v#0J+0_O&=IeIyAۛ'$x !̀vsr9w뻘Xۈ-O~:Me|.oeRMB4|zns` 9.?wMI<0ߺx'nM|Ov^.>=LE=V?_Bْn)|{Bbw؛.n>^^I&#R/=x7 Z,ñQ4Fp5!ӊ`/k@hoiX֪T$B0<  ,4>(P] vq@JjH)\ >HT PM17rd,+^`( ` BD],w^PD@pEA&h$J2Gҗ< P(1RQ6qw?!˳er P!kY|su`d'@OuPMTANJ/6\NzUXz.Uy@k ;8%WWg=SΓPc(PVP|Qy!z c,V۽Χv>W$&ՈL{Rdn4JT)=Ynyk5-C:CF:vCs)P`4JG94iꇴc.}9orH9Vc"S51圣2:VBΛzRcu+sE+!Pq[QUVҧz5(lj"+quPʩa}ZiXeiL[pQ@H *R:D;kp,Λ q+-t.kR-eC{!4mKI;`F%hrtយ X6Bh-ykNdV78ALR(KvR.;S.b q&WYTg񦃸A?ߠ4ZyB$g 9Jh_MaNKEd d6;Y"J J*-ћ[`7SZ\\ό`v~1S &cj/=l0"qV.„N?%L%a3 щaP=jK^yD5c9 mpa9&W!Uyg>&Cf=O.fNꆭ-kYukqFE4eo {$%&$sۙ#?Ŗ4ӣV_5l3R7/_Bbϣ4l[>dG.'JüoCq=/nQ1lތ䎰ͥ"_t71E& wW/.Gk2 8MqI9Pci?N<||IB9h=`>xR}+2!ŷ Ba)D<ۭ e/1n,y- 
Y<0ژ_ʼnn{|#a?;!xƬ4R5q[D%z(AVaJJ5ڭ`v kSHf2=M89YOG$XDHmܐDɋ Zs'5T6t7[izDSF K)Hh3ll%K dFȩD+1&J9)4L~U] 2VE4rjL0 jq>ZV2B\84Hד>}C(a$(DL|b-ĴLJ>.6#V25RO05f,(̒貱Y) ;\%*%iK/`,K"Vτ_ZQ>v_a >Ye"lTЬヤJCgjXKփcX -* ^`sA T0=;6= }(m-tC+=OR|HI1s-\5_R^kӜ8RLP rj=g-~Ƀ)Mwq4esWP[OCrnO >K"IU1vnE(ŶB5cJ u2{i]@~'FRhN#J6` h5.QS#%JkV?'kW,<wwbA{l̑dy&9ar"cs3V`dqK1r1$l)QT9+|I4<"0/>jӤ 9dk~qVy"]sh(5CE ҫ"e8 F _xBIv.P$D]|J F>lg` )C- P-U_s!E6۹(_, >)L/{yuEc2G=lf/yryryryrVE uZ4kj6H P<79k"dJIosF?|WPž/ 7<}W(!=Nxx?;ߛQAw #D,RP!6te?W++U>5pɕpMRJF[9]Z7o4=$V~ū_}̤y u>~dQ>>fe%j\WߍZ$bh'rxmh<(J *ȣʸ[i \9T5g*%V5S)pq۽x(a*8CS@bGGuWiuTThp]|] )R,͐Ec/ חCp"֢st!0 ޻R2sY/\UW'OP`p5Zp5xb (QjCgFը6W婺f҇"Jk !~x+R9Qߕ./߇uzj R[Mo _:pV5NTJLp; q!t#*P4JXkPP1q4椉Q+<ʆUH%l\__ɭ 1Ku`LkG u8xӣhkvPz A1B.wت/r!E:6&v{w~f;awݼn[lJ?_H I1}u}:MC-H>^~'PQHڦfV uG?G7L-dk r9>,E1t,E1p}T[v|9it{KtN''cן#| zLtzl%cSlɦj]^ ,s%.Nbbya}cG?G3FjsUPv& J?E.Ks+Q5gWkr441y)g'A'֕ 5 iO_pѭE]06D G{wE%pl;g)2a#)(W)Z9ƫβ Fx.r6}n)$~ϥiGd,Trmg\Ij)Fπ}2xd0&GACPoUȒ9 kϛIa3IkD(ڣ ZMP ˣ,\DZ)i _S39i"tʨD9#'t 6HPFKK5+p"S^Yc g&YyZ-VK!h:ӜNr-lkey*-i)IhO-I$4e,Y/3YNcI"4Ȑ[ $',x8X ^Jrf*g&p-Ev^ IZfC?j 9!RCgT/:O(aF:43nS&cbNI@*!cuZ>eXiB(J"PN *Zƌma(xA]v>F f7?=Z^ߓݲhRC||-{{|`\~h Bjn_lʧc4EKh tR[~?,}wn}0HgSө6ra\ y"O1󣚒 A2*:6ZV1WGs 훜F2iaK27Bn!]䟢[wsa~ip<Dx&Vl,7gzw~XZ1OiB~,𗍷wKI/ !MOD0&&8;b]HoT$VvVvź 3 |W-m|o|p(iM?ZG 5!me 8yK>u'螠5-` "”UN{i3El7\Bq`jocGTBe՚IvY~pQ߸Π)A/ʴv˵aVJa^5s=AIFN0= D;sP3n䑦6\#mTtj;Ob/H0雐I >bIyV#%PĐ# t̚DgT=%&e27V7 H Lb{GEeq}ssELa٢yư$`kL^3`HABoġOYk o+y kh FhEjQ`1a%u!F} {Qz̤JХ MZ8l 1 >8TSfb !I%j8DžpxLÇ?{WG/V22`<- v̿ߠJJRYYG-ذ%U*I~'e̗}2_|>׏1B\MJ&\K bD DfGm M:7'*p`p7*c0ĸ@ت/٦V PU˂/RCu|m\WbЋ6T ! 
var/home/core/zuul-output/logs/kubelet.log
Jan 10 06:46:10 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 10 06:46:10 crc restorecon[4718]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 10 06:46:10 crc restorecon[4718]:
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c84,c419
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7
not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 10 06:46:10 crc 
restorecon[4718]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 10 06:46:10 crc 
restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c1016 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 10 06:46:10 crc 
restorecon[4718]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 
06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 10 06:46:10 crc restorecon[4718]:
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 10 06:46:10 crc 
restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 
06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c6 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc 
restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:10 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 
crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc 
restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc 
restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc 
restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc 
restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc 
restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 10 06:46:11 crc restorecon[4718]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 10 06:46:11 crc restorecon[4718]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 10 06:46:11 crc kubenswrapper[4810]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 10 06:46:11 crc kubenswrapper[4810]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 10 06:46:11 crc kubenswrapper[4810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 10 06:46:11 crc kubenswrapper[4810]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 10 06:46:11 crc kubenswrapper[4810]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 10 06:46:11 crc kubenswrapper[4810]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.510661 4810 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513528 4810 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513545 4810 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513550 4810 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513554 4810 feature_gate.go:330] unrecognized feature gate: Example Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513558 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513562 4810 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513565 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513570 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513573 4810 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513577 4810 feature_gate.go:330] 
unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513582 4810 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513588 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513593 4810 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513597 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513602 4810 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513606 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513610 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513618 4810 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513621 4810 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513625 4810 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513629 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513632 4810 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513637 4810 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513642 4810 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513646 4810 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513649 4810 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513653 4810 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513656 4810 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513660 4810 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513664 4810 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513669 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513673 4810 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513677 4810 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513680 4810 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513685 4810 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513690 4810 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513695 4810 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513699 4810 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513703 4810 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513708 4810 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513712 4810 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513717 4810 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513720 4810 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513724 4810 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513728 4810 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513733 4810 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513737 4810 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513741 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513746 4810 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513749 4810 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513753 4810 feature_gate.go:330] 
unrecognized feature gate: NetworkLiveMigration Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513756 4810 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513760 4810 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513764 4810 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513767 4810 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513771 4810 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513774 4810 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513778 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513782 4810 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513786 4810 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513790 4810 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513793 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513797 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513800 4810 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513803 4810 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513807 4810 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513810 4810 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513815 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513819 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513822 4810 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.513826 4810 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513896 4810 flags.go:64] FLAG: --address="0.0.0.0" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513905 4810 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513912 4810 flags.go:64] FLAG: --anonymous-auth="true" Jan 
10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513917 4810 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513922 4810 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513926 4810 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513933 4810 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513937 4810 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513942 4810 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513946 4810 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513951 4810 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513955 4810 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513959 4810 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513963 4810 flags.go:64] FLAG: --cgroup-root="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513967 4810 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513971 4810 flags.go:64] FLAG: --client-ca-file="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513976 4810 flags.go:64] FLAG: --cloud-config="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513980 4810 flags.go:64] FLAG: --cloud-provider="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513984 4810 flags.go:64] FLAG: --cluster-dns="[]" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513989 
4810 flags.go:64] FLAG: --cluster-domain="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513993 4810 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.513997 4810 flags.go:64] FLAG: --config-dir="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514001 4810 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514006 4810 flags.go:64] FLAG: --container-log-max-files="5" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514011 4810 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514015 4810 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514020 4810 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514025 4810 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514029 4810 flags.go:64] FLAG: --contention-profiling="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514033 4810 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514037 4810 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514041 4810 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514045 4810 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514052 4810 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514056 4810 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514060 4810 flags.go:64] FLAG: 
--enable-debugging-handlers="true" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514064 4810 flags.go:64] FLAG: --enable-load-reader="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514068 4810 flags.go:64] FLAG: --enable-server="true" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514072 4810 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514078 4810 flags.go:64] FLAG: --event-burst="100" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514082 4810 flags.go:64] FLAG: --event-qps="50" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514086 4810 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514091 4810 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514095 4810 flags.go:64] FLAG: --eviction-hard="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514099 4810 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514103 4810 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514107 4810 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514111 4810 flags.go:64] FLAG: --eviction-soft="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514115 4810 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514119 4810 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514123 4810 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514127 4810 flags.go:64] FLAG: --experimental-mounter-path="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514131 4810 
flags.go:64] FLAG: --fail-cgroupv1="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514135 4810 flags.go:64] FLAG: --fail-swap-on="true" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514139 4810 flags.go:64] FLAG: --feature-gates="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514144 4810 flags.go:64] FLAG: --file-check-frequency="20s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514148 4810 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514152 4810 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514156 4810 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514160 4810 flags.go:64] FLAG: --healthz-port="10248" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514164 4810 flags.go:64] FLAG: --help="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514168 4810 flags.go:64] FLAG: --hostname-override="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514172 4810 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514176 4810 flags.go:64] FLAG: --http-check-frequency="20s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514180 4810 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514184 4810 flags.go:64] FLAG: --image-credential-provider-config="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514201 4810 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514206 4810 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514210 4810 flags.go:64] FLAG: --image-service-endpoint="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514215 4810 flags.go:64] FLAG: 
--kernel-memcg-notification="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514220 4810 flags.go:64] FLAG: --kube-api-burst="100" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514224 4810 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514228 4810 flags.go:64] FLAG: --kube-api-qps="50" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514234 4810 flags.go:64] FLAG: --kube-reserved="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514238 4810 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514242 4810 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514246 4810 flags.go:64] FLAG: --kubelet-cgroups="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514250 4810 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514255 4810 flags.go:64] FLAG: --lock-file="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514259 4810 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514263 4810 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514267 4810 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514276 4810 flags.go:64] FLAG: --log-json-split-stream="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514280 4810 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514285 4810 flags.go:64] FLAG: --log-text-split-stream="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514289 4810 flags.go:64] FLAG: --logging-format="text" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514293 4810 flags.go:64] FLAG: 
--machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514297 4810 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514301 4810 flags.go:64] FLAG: --manifest-url="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514306 4810 flags.go:64] FLAG: --manifest-url-header="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514311 4810 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514315 4810 flags.go:64] FLAG: --max-open-files="1000000" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514320 4810 flags.go:64] FLAG: --max-pods="110" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514325 4810 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514329 4810 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514333 4810 flags.go:64] FLAG: --memory-manager-policy="None" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514337 4810 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514341 4810 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514345 4810 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514349 4810 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514358 4810 flags.go:64] FLAG: --node-status-max-images="50" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514362 4810 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514367 4810 flags.go:64] 
FLAG: --oom-score-adj="-999" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514371 4810 flags.go:64] FLAG: --pod-cidr="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514375 4810 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514382 4810 flags.go:64] FLAG: --pod-manifest-path="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514385 4810 flags.go:64] FLAG: --pod-max-pids="-1" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514390 4810 flags.go:64] FLAG: --pods-per-core="0" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514393 4810 flags.go:64] FLAG: --port="10250" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514398 4810 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514402 4810 flags.go:64] FLAG: --provider-id="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514406 4810 flags.go:64] FLAG: --qos-reserved="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514410 4810 flags.go:64] FLAG: --read-only-port="10255" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514414 4810 flags.go:64] FLAG: --register-node="true" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514418 4810 flags.go:64] FLAG: --register-schedulable="true" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514422 4810 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514429 4810 flags.go:64] FLAG: --registry-burst="10" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514433 4810 flags.go:64] FLAG: --registry-qps="5" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514437 4810 flags.go:64] FLAG: --reserved-cpus="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514441 
4810 flags.go:64] FLAG: --reserved-memory="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514446 4810 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514450 4810 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514454 4810 flags.go:64] FLAG: --rotate-certificates="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514458 4810 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514463 4810 flags.go:64] FLAG: --runonce="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514467 4810 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514471 4810 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514475 4810 flags.go:64] FLAG: --seccomp-default="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514479 4810 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514483 4810 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514487 4810 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514492 4810 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514496 4810 flags.go:64] FLAG: --storage-driver-password="root" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514499 4810 flags.go:64] FLAG: --storage-driver-secure="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514503 4810 flags.go:64] FLAG: --storage-driver-table="stats" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514507 4810 flags.go:64] FLAG: --storage-driver-user="root" Jan 10 06:46:11 crc kubenswrapper[4810]: 
I0110 06:46:11.514512 4810 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514516 4810 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514520 4810 flags.go:64] FLAG: --system-cgroups="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514524 4810 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514530 4810 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514534 4810 flags.go:64] FLAG: --tls-cert-file="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514538 4810 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514543 4810 flags.go:64] FLAG: --tls-min-version="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514547 4810 flags.go:64] FLAG: --tls-private-key-file="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514551 4810 flags.go:64] FLAG: --topology-manager-policy="none" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514556 4810 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514560 4810 flags.go:64] FLAG: --topology-manager-scope="container" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514564 4810 flags.go:64] FLAG: --v="2" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514569 4810 flags.go:64] FLAG: --version="false" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514575 4810 flags.go:64] FLAG: --vmodule="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514580 4810 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514585 4810 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 10 06:46:11 crc kubenswrapper[4810]: 
W0110 06:46:11.514684 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514689 4810 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514693 4810 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514697 4810 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514701 4810 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514704 4810 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514708 4810 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514712 4810 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514715 4810 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514719 4810 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514723 4810 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514726 4810 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514729 4810 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514733 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514737 4810 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 
10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514740 4810 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514744 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514747 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514751 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514755 4810 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514758 4810 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514761 4810 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514765 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514768 4810 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514773 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514776 4810 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514779 4810 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514783 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514786 4810 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514790 4810 
feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514793 4810 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514797 4810 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514800 4810 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514804 4810 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514807 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514813 4810 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514817 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514820 4810 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514824 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514827 4810 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514831 4810 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514835 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514838 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514842 4810 feature_gate.go:330] unrecognized 
feature gate: ImageStreamImportMode Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514845 4810 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514849 4810 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514853 4810 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514857 4810 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514861 4810 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514864 4810 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514868 4810 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514871 4810 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514875 4810 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514878 4810 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514881 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514886 4810 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514890 4810 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514894 4810 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514898 4810 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514901 4810 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514905 4810 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514909 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514912 4810 feature_gate.go:330] unrecognized feature gate: Example Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514916 4810 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514919 4810 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514923 4810 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514926 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514932 4810 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514937 4810 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514941 4810 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.514945 4810 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.514957 4810 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.528146 4810 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.528189 4810 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528354 4810 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528366 4810 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528376 4810 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528385 4810 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528394 4810 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528402 4810 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528410 4810 
feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528419 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528427 4810 feature_gate.go:330] unrecognized feature gate: Example Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528435 4810 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528443 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528450 4810 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528458 4810 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528466 4810 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528474 4810 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528482 4810 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528489 4810 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528497 4810 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528505 4810 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528512 4810 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528519 4810 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 
06:46:11.528530 4810 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528541 4810 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528550 4810 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528559 4810 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528567 4810 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528577 4810 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528589 4810 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528597 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528606 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528616 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528626 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528634 4810 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528642 4810 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528650 4810 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles 
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528658 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528667 4810 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528674 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528682 4810 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528692 4810 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528702 4810 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528709 4810 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528717 4810 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528725 4810 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528733 4810 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528740 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528748 4810 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528756 4810 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528764 4810 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528772 4810 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528779 4810 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528789 4810 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528798 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528807 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528817 4810 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528825 4810 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528833 4810 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528840 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528848 4810 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528855 4810 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528864 4810 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528871 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528880 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528888 4810 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528895 4810 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528904 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528912 4810 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528919 4810 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528927 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528934 4810 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.528942 4810 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.528955 4810 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529180 4810 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529235 4810 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529244 4810 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529254 4810 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529262 4810 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529271 4810 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529280 4810 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529288 4810 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529299 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529308 4810 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529316 4810 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529323 4810 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529331 4810 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529339 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529347 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529354 4810 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529362 4810 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529370 4810 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529377 4810 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529385 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529392 4810 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529400 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529408 4810 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529418 4810 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529426 4810 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529433 4810 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529443 4810 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529454 4810 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529463 4810 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529472 4810 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529480 4810 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529489 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529497 4810 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529505 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529514 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529521 4810 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529530 4810 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529537 4810 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529545 4810 feature_gate.go:330] unrecognized feature gate: Example
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529552 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529560 4810 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529568 4810 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529578 4810 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529588 4810 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529596 4810 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529605 4810 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529613 4810 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529621 4810 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529629 4810 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529638 4810 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529646 4810 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529654 4810 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529662 4810 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529670 4810 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529677 4810 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529685 4810 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529693 4810 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529700 4810 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529709 4810 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529717 4810 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529725 4810 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529732 4810 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529739 4810 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529747 4810 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529755 4810 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529763 4810 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529771 4810 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529779 4810 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529787 4810 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529797 4810 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.529807 4810 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.529821 4810 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.530127 4810 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.534436 4810 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.534574 4810 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.535436 4810 server.go:997] "Starting client certificate rotation"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.535472 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.535982 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-23 21:15:13.552541011 +0000 UTC
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.536122 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.543260 4810 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 10 06:46:11 crc kubenswrapper[4810]: E0110 06:46:11.545434 4810 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.546339 4810 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.554689 4810 log.go:25] "Validated CRI v1 runtime API"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.576888 4810 log.go:25] "Validated CRI v1 image API"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.579254 4810 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.582229 4810 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-10-06-41-53-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.582277 4810 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.606879 4810 manager.go:217] Machine: {Timestamp:2026-01-10 06:46:11.604439797 +0000 UTC m=+0.219932760 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0be3815d-a057-4f47-a377-5918543441fa BootID:f454f6a0-7590-4048-bb2d-55af2f1576d0 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:89:c6:65 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:89:c6:65 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:e3:05:df Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:cc:07:7b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5a:41:8c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f7:ea:64 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:82:9b:93:65:32:bc Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:a2:a8:ef:36:2c:5d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.607358 4810 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.607727 4810 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.608658 4810 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.609014 4810 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.609086 4810 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.609461 4810 topology_manager.go:138] "Creating topology manager with none policy"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.609479 4810 container_manager_linux.go:303] "Creating device plugin manager"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.609883 4810 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.609955 4810 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.610499 4810 state_mem.go:36] "Initialized new in-memory state store"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.610645 4810 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.612448 4810 kubelet.go:418] "Attempting to sync node with API server"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.612504 4810 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.612576 4810 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.612604 4810 kubelet.go:324] "Adding apiserver pod source"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.612623 4810 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.614452 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.614471 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 10 06:46:11 crc kubenswrapper[4810]: E0110 06:46:11.614578 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 10 06:46:11 crc kubenswrapper[4810]: E0110 06:46:11.614588 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.615008 4810 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.615525 4810 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.616862 4810 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.617684 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.617739 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.617761 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.617779 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.617809 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.617823 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.617836 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.617858 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.617875 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.617888 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.617910 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.617923 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.618256 4810 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.619035 4810 server.go:1280] "Started kubelet"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.619381 4810 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.619516 4810 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.620232 4810 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.620343 4810 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 10 06:46:11 crc systemd[1]: Started Kubernetes Kubelet.
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.621450 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.621760 4810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 10 06:46:11 crc kubenswrapper[4810]: E0110 06:46:11.622104 4810 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.622077 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:06:38.259673784 +0000 UTC
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.622966 4810 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.622989 4810 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 10 06:46:11 crc kubenswrapper[4810]: E0110 06:46:11.623007 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="200ms"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.623148 4810 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 10 06:46:11 crc kubenswrapper[4810]: E0110 06:46:11.622625 4810 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18894bb57f725364 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-10 06:46:11.618992996 +0000 UTC m=+0.234485929,LastTimestamp:2026-01-10 06:46:11.618992996 +0000 UTC m=+0.234485929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.626146 4810 factory.go:55] Registering systemd factory
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.627459 4810 factory.go:221] Registration of the systemd container factory successfully
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.627591 4810 server.go:460] "Adding debug handlers to kubelet server"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.629157 4810 factory.go:153] Registering CRI-O factory
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.629188 4810 factory.go:221] Registration of the crio container factory successfully
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.629157 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 10 06:46:11 crc kubenswrapper[4810]: E0110 06:46:11.629292 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.629347 4810 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.629391 4810 factory.go:103] Registering Raw factory
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.629424 4810 manager.go:1196] Started watching for new ooms in manager
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.635171 4810 manager.go:319] Starting recovery of all containers
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644141 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644331 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644368 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644396 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644424 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644443 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644463 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644487 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644516 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644548 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644579 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644607 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644634 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644674 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644707 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644763 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644791 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644818 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644849 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644879 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644907 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644935 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.644964 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645077 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645110 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645140 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645177 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645248 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645282 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645314 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645341 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645368 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645398 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645425 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645450 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645476 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645502 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645529 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645558 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645584 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645611 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645639 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645669 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645694 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645722 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645753 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645781 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645807 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645834 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645862 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645889 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645915 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645954 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.645985 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646016 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646048 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646077 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646104 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646131 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646158 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646187 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646332 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646356 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646381 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646403 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646428 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646454 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646479 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646506 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646532 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646556 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646581 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646605 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646630 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646658 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646683 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646712 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646741 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646768 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646832 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646862 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646891 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.646975 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647002 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647029 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647061 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647089 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647116 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647141 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647169 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647237 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647266 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647296 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647325 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647351 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647378 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647406 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647432 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647460 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647486 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647512 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities"
seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647539 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647564 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647590 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647627 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647707 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647740 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 10 06:46:11 
crc kubenswrapper[4810]: I0110 06:46:11.647770 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647799 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647832 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647861 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647889 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647917 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647944 4810 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647970 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.647996 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648022 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648050 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648082 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648110 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648136 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648162 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648189 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648258 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648283 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648314 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648340 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648366 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648396 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648424 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648450 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648479 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648504 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648530 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648556 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648581 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648606 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648633 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648660 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648688 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648716 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648746 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648772 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648801 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648829 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648856 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648886 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648912 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648938 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.648963 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649042 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649067 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649093 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649120 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649146 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649183 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" 
seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649245 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649275 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649302 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649329 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649354 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649380 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649407 4810 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649433 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.649461 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651345 4810 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651407 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651439 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 10 06:46:11 crc 
kubenswrapper[4810]: I0110 06:46:11.651466 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651495 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651526 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651557 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651588 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651621 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651651 4810 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651679 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651708 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651738 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651767 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651799 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651846 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651874 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651900 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651928 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651965 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.651991 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652019 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652043 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652065 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652095 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652120 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652147 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652172 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652242 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652277 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652305 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652337 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652364 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652392 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652417 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652443 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652469 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652493 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652521 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652548 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652576 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652605 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652633 4810 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652661 4810 reconstruct.go:97] "Volume reconstruction finished"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.652680 4810 reconciler.go:26] "Reconciler: start to sync state"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.664266 4810 manager.go:324] Recovery completed
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.678775 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.684302 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.684375 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.684399 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.685458 4810 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.685496 4810 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.685534 4810 state_mem.go:36] "Initialized new in-memory state store"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.687499 4810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.691577 4810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.691650 4810 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.691691 4810 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 10 06:46:11 crc kubenswrapper[4810]: E0110 06:46:11.691773 4810 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.692494 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Jan 10 06:46:11 crc kubenswrapper[4810]: E0110 06:46:11.692570 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.697465 4810 policy_none.go:49] "None policy: Start"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.698959 4810 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.698988 4810 state_mem.go:35] "Initializing new in-memory state store"
Jan 10 06:46:11 crc kubenswrapper[4810]: E0110 06:46:11.722293 4810 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.760962 4810 manager.go:334] "Starting Device Plugin manager"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.761034 4810 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.761053 4810 server.go:79] "Starting device plugin registration server"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.761678 4810 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.761708 4810 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.761938 4810 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.762090 4810 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.762104 4810 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 10 06:46:11 crc kubenswrapper[4810]: E0110 06:46:11.771651 4810 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.791944 4810 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.792070 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.793494 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.793536 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.793556 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.793723 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.794184 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.794240 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.794914 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.794961 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.794980 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.795183 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.795388 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.795411 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.795607 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.795663 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.795683 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.796324 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.796378 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.796336 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.796445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.796464 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.796392 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.796986 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.797177 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.797259 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.798063 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.798099 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.798111 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.798344 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.798485 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.798523 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.798919 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.798974 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.798992 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.799543 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.799571 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.799606 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.799754 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.799774 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.800401 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.800463 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.800487 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.801223 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.801291 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.801309 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:11 crc kubenswrapper[4810]: E0110 06:46:11.824656 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="400ms"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.855803 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.855871 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.855936 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.855987 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.856098 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.856149 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.856215 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.856249 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.856311 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.856408 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.856458 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.856501 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.856542 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.856582 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.856615 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.862335 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.863665 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.863712 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.863730 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.863761 4810 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: E0110 06:46:11.864413 4810 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.957651 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.957748 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.957847 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.957880 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.957943 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.957965 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.957987 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958009 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958043 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958073 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958095 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958109 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958151 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.957940 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958184 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958250 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958258 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958280 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958314 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958345 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958375 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958418 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958460 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958466 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958419 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958512 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 10 06:46:11 crc kubenswrapper[4810]:
I0110 06:46:11.958315 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958110 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958216 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.958426 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.961883 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 10 06:46:11 crc kubenswrapper[4810]: I0110 06:46:11.971524 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 10 06:46:11 crc kubenswrapper[4810]: W0110 06:46:11.997061 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0e101f4bcc554209832f3020bb0f4f851e4c57ce657eeeb590e13576df861c85 WatchSource:0}: Error finding container 0e101f4bcc554209832f3020bb0f4f851e4c57ce657eeeb590e13576df861c85: Status 404 returned error can't find the container with id 0e101f4bcc554209832f3020bb0f4f851e4c57ce657eeeb590e13576df861c85 Jan 10 06:46:12 crc kubenswrapper[4810]: W0110 06:46:12.001023 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8b5d5d13ff4bf21e9f2e8ff390a1a883bb9499a93f04b4cb05ea57bdaaa746de WatchSource:0}: Error finding container 8b5d5d13ff4bf21e9f2e8ff390a1a883bb9499a93f04b4cb05ea57bdaaa746de: Status 404 returned error can't find the container with id 8b5d5d13ff4bf21e9f2e8ff390a1a883bb9499a93f04b4cb05ea57bdaaa746de Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.064549 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.066699 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.066791 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.066826 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.066869 4810 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 10 06:46:12 crc 
kubenswrapper[4810]: E0110 06:46:12.067612 4810 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.146628 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.160910 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 10 06:46:12 crc kubenswrapper[4810]: W0110 06:46:12.169215 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a4d7e0bcf8af69078110d03197e2e688124f38ce2e91e81b4051ad05151b6bbe WatchSource:0}: Error finding container a4d7e0bcf8af69078110d03197e2e688124f38ce2e91e81b4051ad05151b6bbe: Status 404 returned error can't find the container with id a4d7e0bcf8af69078110d03197e2e688124f38ce2e91e81b4051ad05151b6bbe Jan 10 06:46:12 crc kubenswrapper[4810]: W0110 06:46:12.178792 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-adab4f4ae340fb75196df9247e365108e9e4561f04ff2c5362a52ac2fd0641cf WatchSource:0}: Error finding container adab4f4ae340fb75196df9247e365108e9e4561f04ff2c5362a52ac2fd0641cf: Status 404 returned error can't find the container with id adab4f4ae340fb75196df9247e365108e9e4561f04ff2c5362a52ac2fd0641cf Jan 10 06:46:12 crc kubenswrapper[4810]: E0110 06:46:12.226436 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" 
interval="800ms" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.239582 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:46:12 crc kubenswrapper[4810]: W0110 06:46:12.259875 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-41465f303130c86991e707c93cd3a9c2c8cab89bca2b3f1e1178e783af135ee7 WatchSource:0}: Error finding container 41465f303130c86991e707c93cd3a9c2c8cab89bca2b3f1e1178e783af135ee7: Status 404 returned error can't find the container with id 41465f303130c86991e707c93cd3a9c2c8cab89bca2b3f1e1178e783af135ee7 Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.468747 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.470554 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.470594 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.470604 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.470633 4810 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 10 06:46:12 crc kubenswrapper[4810]: E0110 06:46:12.471061 4810 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Jan 10 06:46:12 crc kubenswrapper[4810]: W0110 06:46:12.613184 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 10 06:46:12 crc kubenswrapper[4810]: E0110 06:46:12.613337 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.620436 4810 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.622736 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:20:06.375792607 +0000 UTC Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.622814 4810 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 101h33m53.752982974s for next certificate rotation Jan 10 06:46:12 crc kubenswrapper[4810]: W0110 06:46:12.671254 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 10 06:46:12 crc kubenswrapper[4810]: E0110 06:46:12.671384 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection 
refused" logger="UnhandledError" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.698414 4810 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85" exitCode=0 Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.698491 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85"} Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.698611 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"adab4f4ae340fb75196df9247e365108e9e4561f04ff2c5362a52ac2fd0641cf"} Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.698722 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.700108 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.700151 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.700170 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.701954 4810 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566" exitCode=0 Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.702005 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566"} Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.702085 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a4d7e0bcf8af69078110d03197e2e688124f38ce2e91e81b4051ad05151b6bbe"} Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.702270 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.703558 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.703621 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.703641 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.704115 4810 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab" exitCode=0 Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.704275 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab"} Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.704320 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0e101f4bcc554209832f3020bb0f4f851e4c57ce657eeeb590e13576df861c85"} Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.704458 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.705925 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.706019 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.706046 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.706323 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498"} Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.706368 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8b5d5d13ff4bf21e9f2e8ff390a1a883bb9499a93f04b4cb05ea57bdaaa746de"} Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.710403 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341" exitCode=0 Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.710456 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341"} Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.710533 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"41465f303130c86991e707c93cd3a9c2c8cab89bca2b3f1e1178e783af135ee7"} Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.710686 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.712677 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.712727 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.712744 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.716867 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.719033 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.719075 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:12 crc kubenswrapper[4810]: I0110 06:46:12.719093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:12 crc kubenswrapper[4810]: W0110 06:46:12.915816 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 10 06:46:12 crc kubenswrapper[4810]: E0110 06:46:12.915887 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 10 06:46:13 crc kubenswrapper[4810]: E0110 06:46:13.028284 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="1.6s" Jan 10 06:46:13 crc kubenswrapper[4810]: W0110 06:46:13.059804 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Jan 10 06:46:13 crc kubenswrapper[4810]: E0110 06:46:13.059895 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.271308 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.273930 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:13 crc kubenswrapper[4810]: 
I0110 06:46:13.273979 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.273992 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.274027 4810 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 10 06:46:13 crc kubenswrapper[4810]: E0110 06:46:13.274534 4810 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.614835 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.719404 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e"} Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.719468 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d"} Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.719496 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190"} Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.719517 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93"} Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.723003 4810 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709" exitCode=0 Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.723096 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709"} Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.723315 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.724926 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.724985 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.725002 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.726152 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c71b7e621e2b39c51ab2a8542fd9754b2747b0fb64701ed2d45ba4cc84479b46"} Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.726306 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.727465 4810 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.727497 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.727509 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.734115 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05"} Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.734179 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152"} Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.734235 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40"} Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.734364 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.735534 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.735580 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.735599 4810 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.737091 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d"} Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.737136 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9"} Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.737160 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7"} Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.737227 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.738333 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.738383 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.738401 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:13 crc kubenswrapper[4810]: I0110 06:46:13.954446 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.748720 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa"} Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.748788 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.749784 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.749828 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.749845 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.753045 4810 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb" exitCode=0 Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.753103 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb"} Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.753239 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.753372 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.754864 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:14 crc 
kubenswrapper[4810]: I0110 06:46:14.754911 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.754912 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.754956 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.755005 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.755052 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.875665 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.877093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.877144 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.877162 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:14 crc kubenswrapper[4810]: I0110 06:46:14.877225 4810 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 10 06:46:15 crc kubenswrapper[4810]: I0110 06:46:15.366487 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 10 06:46:15 crc kubenswrapper[4810]: I0110 06:46:15.446148 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:46:15 crc kubenswrapper[4810]: I0110 06:46:15.762358 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f"} Jan 10 06:46:15 crc kubenswrapper[4810]: I0110 06:46:15.762415 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9"} Jan 10 06:46:15 crc kubenswrapper[4810]: I0110 06:46:15.762441 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:15 crc kubenswrapper[4810]: I0110 06:46:15.762554 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:15 crc kubenswrapper[4810]: I0110 06:46:15.762440 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929"} Jan 10 06:46:15 crc kubenswrapper[4810]: I0110 06:46:15.763588 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:15 crc kubenswrapper[4810]: I0110 06:46:15.763641 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:15 crc kubenswrapper[4810]: I0110 06:46:15.763660 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:15 crc kubenswrapper[4810]: I0110 06:46:15.764470 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:15 crc kubenswrapper[4810]: 
I0110 06:46:15.764517 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:15 crc kubenswrapper[4810]: I0110 06:46:15.764535 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.477078 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.772664 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e"} Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.772726 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f"} Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.772747 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.772865 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.772876 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.774302 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.774353 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.774375 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.774577 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.774613 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.774634 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.774944 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.775012 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:16 crc kubenswrapper[4810]: I0110 06:46:16.775032 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:17 crc kubenswrapper[4810]: I0110 06:46:17.405345 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 10 06:46:17 crc kubenswrapper[4810]: I0110 06:46:17.405586 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:17 crc kubenswrapper[4810]: I0110 06:46:17.407743 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:17 crc kubenswrapper[4810]: I0110 06:46:17.407820 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:17 crc kubenswrapper[4810]: I0110 06:46:17.407842 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 10 06:46:17 crc kubenswrapper[4810]: I0110 06:46:17.775515 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:17 crc kubenswrapper[4810]: I0110 06:46:17.775529 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:17 crc kubenswrapper[4810]: I0110 06:46:17.779294 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:17 crc kubenswrapper[4810]: I0110 06:46:17.779356 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:17 crc kubenswrapper[4810]: I0110 06:46:17.779376 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:17 crc kubenswrapper[4810]: I0110 06:46:17.780818 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:17 crc kubenswrapper[4810]: I0110 06:46:17.780993 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:17 crc kubenswrapper[4810]: I0110 06:46:17.782306 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:18 crc kubenswrapper[4810]: I0110 06:46:18.135853 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 10 06:46:18 crc kubenswrapper[4810]: I0110 06:46:18.136115 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:18 crc kubenswrapper[4810]: I0110 06:46:18.137673 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:18 crc kubenswrapper[4810]: I0110 06:46:18.137726 
4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:18 crc kubenswrapper[4810]: I0110 06:46:18.137742 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:18 crc kubenswrapper[4810]: I0110 06:46:18.143860 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 10 06:46:18 crc kubenswrapper[4810]: I0110 06:46:18.778658 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:18 crc kubenswrapper[4810]: I0110 06:46:18.780727 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:18 crc kubenswrapper[4810]: I0110 06:46:18.780913 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:18 crc kubenswrapper[4810]: I0110 06:46:18.781046 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.222886 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.223553 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.225310 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.225537 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.225738 4810 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.260951 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.399930 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.400161 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.401494 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.401563 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.401583 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.781302 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.782645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.782695 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:19 crc kubenswrapper[4810]: I0110 06:46:19.782714 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:20 crc kubenswrapper[4810]: I0110 06:46:20.382558 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 10 06:46:20 crc kubenswrapper[4810]: I0110 
06:46:20.382869 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:20 crc kubenswrapper[4810]: I0110 06:46:20.384674 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:20 crc kubenswrapper[4810]: I0110 06:46:20.384736 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:20 crc kubenswrapper[4810]: I0110 06:46:20.384755 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:21 crc kubenswrapper[4810]: E0110 06:46:21.771813 4810 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 10 06:46:22 crc kubenswrapper[4810]: I0110 06:46:22.261829 4810 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 10 06:46:22 crc kubenswrapper[4810]: I0110 06:46:22.261926 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 10 06:46:23 crc kubenswrapper[4810]: E0110 06:46:23.617168 4810 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 10 06:46:23 crc kubenswrapper[4810]: I0110 06:46:23.620566 4810 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 10 06:46:23 crc kubenswrapper[4810]: I0110 06:46:23.961308 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 10 06:46:23 crc kubenswrapper[4810]: I0110 06:46:23.961491 4810 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 10 06:46:23 crc kubenswrapper[4810]: I0110 06:46:23.962983 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:23 crc kubenswrapper[4810]: I0110 06:46:23.963059 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:23 crc kubenswrapper[4810]: I0110 06:46:23.963082 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:24 crc kubenswrapper[4810]: E0110 06:46:24.628933 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 10 06:46:24 crc kubenswrapper[4810]: W0110 06:46:24.723155 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake 
timeout Jan 10 06:46:24 crc kubenswrapper[4810]: I0110 06:46:24.723301 4810 trace.go:236] Trace[262004988]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Jan-2026 06:46:14.721) (total time: 10001ms): Jan 10 06:46:24 crc kubenswrapper[4810]: Trace[262004988]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:46:24.723) Jan 10 06:46:24 crc kubenswrapper[4810]: Trace[262004988]: [10.001578523s] [10.001578523s] END Jan 10 06:46:24 crc kubenswrapper[4810]: E0110 06:46:24.723332 4810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 10 06:46:24 crc kubenswrapper[4810]: W0110 06:46:24.803249 4810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 10 06:46:24 crc kubenswrapper[4810]: I0110 06:46:24.803362 4810 trace.go:236] Trace[1461556021]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Jan-2026 06:46:14.801) (total time: 10001ms): Jan 10 06:46:24 crc kubenswrapper[4810]: Trace[1461556021]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:46:24.803) Jan 10 06:46:24 crc kubenswrapper[4810]: Trace[1461556021]: [10.00185707s] [10.00185707s] END Jan 10 06:46:24 crc kubenswrapper[4810]: E0110 06:46:24.803392 4810 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 10 06:46:24 crc kubenswrapper[4810]: I0110 06:46:24.841181 4810 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 10 06:46:24 crc kubenswrapper[4810]: I0110 06:46:24.841278 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 10 06:46:24 crc kubenswrapper[4810]: I0110 06:46:24.847075 4810 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 10 06:46:24 crc kubenswrapper[4810]: I0110 06:46:24.847147 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 10 06:46:27 crc kubenswrapper[4810]: I0110 06:46:27.777064 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 10 
06:46:27 crc kubenswrapper[4810]: I0110 06:46:27.792853 4810 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 10 06:46:27 crc kubenswrapper[4810]: I0110 06:46:27.887109 4810 csr.go:261] certificate signing request csr-ml9h8 is approved, waiting to be issued Jan 10 06:46:27 crc kubenswrapper[4810]: I0110 06:46:27.915750 4810 csr.go:257] certificate signing request csr-ml9h8 is issued Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.315685 4810 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.621412 4810 apiserver.go:52] "Watching apiserver" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.625540 4810 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.626027 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.626508 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.626614 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:28 crc kubenswrapper[4810]: E0110 06:46:28.626687 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.626821 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.626967 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 10 06:46:28 crc kubenswrapper[4810]: E0110 06:46:28.627049 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.627088 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.627013 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 10 06:46:28 crc kubenswrapper[4810]: E0110 06:46:28.627417 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.629141 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.629631 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.629787 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.630468 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.630478 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.630481 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.630774 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.630794 4810 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-operator"/"metrics-tls" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.632910 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.676160 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.694491 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.709087 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.724061 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.724847 4810 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.737268 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.750156 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.764885 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.778897 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.917296 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-10 06:41:27 +0000 UTC, rotation deadline is 2026-10-20 14:39:21.419169332 +0000 UTC Jan 10 06:46:28 crc kubenswrapper[4810]: I0110 06:46:28.917366 4810 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6799h52m52.501806387s for next certificate rotation Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.227618 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.233868 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.241779 4810 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.242175 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.252785 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.261809 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.273584 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.285798 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.301476 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.311308 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.323253 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"
2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.338412 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.348171 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.362559 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.372971 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.381354 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.809934 4810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.843526 4810 trace.go:236] Trace[712253002]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Jan-2026 06:46:15.789) (total time: 14054ms): Jan 10 06:46:29 crc kubenswrapper[4810]: Trace[712253002]: ---"Objects listed" error: 14054ms (06:46:29.843) Jan 10 06:46:29 crc 
kubenswrapper[4810]: Trace[712253002]: [14.054239403s] [14.054239403s] END Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.843574 4810 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.844412 4810 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.844778 4810 trace.go:236] Trace[1851827234]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (10-Jan-2026 06:46:15.859) (total time: 13985ms): Jan 10 06:46:29 crc kubenswrapper[4810]: Trace[1851827234]: ---"Objects listed" error: 13985ms (06:46:29.844) Jan 10 06:46:29 crc kubenswrapper[4810]: Trace[1851827234]: [13.985464521s] [13.985464521s] END Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.844808 4810 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.848867 4810 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.849133 4810 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.850370 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.850495 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.850576 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.850648 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.850713 
4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:29Z","lastTransitionTime":"2026-01-10T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:29 crc kubenswrapper[4810]: E0110 06:46:29.865443 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.869674 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.869729 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.869742 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.869760 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.869772 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:29Z","lastTransitionTime":"2026-01-10T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:29 crc kubenswrapper[4810]: E0110 06:46:29.880541 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.884110 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.887580 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.887614 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.887624 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.887640 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.887649 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:29Z","lastTransitionTime":"2026-01-10T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.894706 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: E0110 06:46:29.909404 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.913654 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.914557 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.914600 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.914615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.914638 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.914649 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:29Z","lastTransitionTime":"2026-01-10T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:29 crc kubenswrapper[4810]: E0110 06:46:29.923103 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.923541 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.926650 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.926768 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.926881 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.926959 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.927029 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:29Z","lastTransitionTime":"2026-01-10T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.935281 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: E0110 06:46:29.940563 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: E0110 06:46:29.940715 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.942247 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.942289 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.942301 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.942317 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.942332 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:29Z","lastTransitionTime":"2026-01-10T06:46:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945585 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945627 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945645 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945662 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945678 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945695 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945710 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945726 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945744 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945761 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945775 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 10 06:46:29 crc 
kubenswrapper[4810]: I0110 06:46:29.945792 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945807 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945823 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945837 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945887 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945903 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945919 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945934 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945950 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945980 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.945994 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 10 06:46:29 crc 
kubenswrapper[4810]: I0110 06:46:29.946010 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946025 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946041 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946056 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946088 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946106 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946120 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946135 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946150 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946163 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946184 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946217 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946248 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946262 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946276 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946290 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946305 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946319 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946335 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946351 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946366 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946382 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946398 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946413 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946427 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946442 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946456 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946471 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946485 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946502 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946518 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946506 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946539 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946571 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946587 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946604 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946620 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946635 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946649 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946666 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946680 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946716 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946753 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946767 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946793 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946807 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946832 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946848 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946944 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946959 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946974 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.946988 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.947115 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.947131 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.947146 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.947161 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.947177 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948134 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948180 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948235 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948236 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948284 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948309 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948361 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948379 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948404 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948424 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948439 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948460 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948497 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948438 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948807 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.952907 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.953018 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.948942 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.949070 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.949228 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.949237 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.949474 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.949420 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.949687 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.949773 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.949794 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.949830 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.953170 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950004 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950039 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950120 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950173 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950183 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950221 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950323 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950462 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.953454 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.953466 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.953556 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950498 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950599 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.953620 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.953726 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.953743 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950769 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950782 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950796 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.950836 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.951010 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.951124 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.951183 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.951505 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.951960 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.952184 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.952263 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.952393 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.952853 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.952873 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.953570 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.954004 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.953877 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.954067 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.954303 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.954315 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.954369 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.954387 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.954509 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.954590 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.954735 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.954818 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.954852 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.955060 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.955655 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.956144 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.956651 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.956671 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.956979 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.957776 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.957830 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.958384 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.958691 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.961299 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.961615 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.961721 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.961809 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.961885 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.961957 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.962028 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.962099 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.962170 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.962273 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.962344 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.962414 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.962547 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.962632 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.962698 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.962773 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.962847 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.962956 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.961664 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.961857 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.962294 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.963216 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.963344 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.964269 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.964329 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.964577 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.964632 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.964873 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.964979 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.966547 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.967590 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.969081 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.969420 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.969486 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.969513 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.969555 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.969609 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.969756 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.970880 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:29 crc kubenswrapper[4810]: E0110 06:46:29.969789 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:46:30.463563702 +0000 UTC m=+19.079056585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971160 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971172 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.966076 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.969809 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.969934 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971237 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971278 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971315 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971346 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 10 
06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971394 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971427 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971460 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971491 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971523 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.969971 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.970508 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971242 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971517 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971564 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971678 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971711 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971736 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971761 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971784 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971807 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971829 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971856 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971884 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971916 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 10 06:46:29 crc 
kubenswrapper[4810]: I0110 06:46:29.971940 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971963 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971986 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972010 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972034 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972058 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972080 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972104 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972130 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972155 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972181 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972227 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972271 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972295 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972321 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972360 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972386 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972413 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972438 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972464 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972490 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972525 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972552 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972576 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972601 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972629 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972668 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972695 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972721 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972747 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972772 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972794 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972816 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972843 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972869 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972895 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972919 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972966 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972993 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973018 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973049 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973077 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973101 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973123 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973148 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973174 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973220 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973248 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973273 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973299 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973325 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973349 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973375 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973397 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973424 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973448 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973473 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973498 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973521 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973549 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973580 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973604 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973626 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973648 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973670 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973696 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973720 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973773 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973801 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973828 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973858 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973887 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973911 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973937 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973967 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973996 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974025 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974053 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974082 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974110 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974137 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974253 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974272 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974285 4810 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974296 4810 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974308 4810 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974319 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974330 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974342 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974353 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974364 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974376 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974388 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974399 4810 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974410 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974421 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974431 4810 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974443 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974455 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974469 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974482 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974494 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974507 4810 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974521 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974534 4810 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974546 4810 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974558 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974572 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974585 4810 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974598 4810 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974610 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName:
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974623 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974636 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974649 4810 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974661 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974673 4810 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974684 4810 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974696 4810 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 10 
06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974708 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974723 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974737 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974750 4810 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974762 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974776 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974789 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 
06:46:29.974801 4810 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974814 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974826 4810 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974840 4810 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974853 4810 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974867 4810 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974880 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974893 4810 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974906 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974919 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974932 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974944 4810 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974957 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974972 4810 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974985 4810 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974999 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975012 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975026 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975039 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975080 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975094 4810 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975106 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975119 4810 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975132 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975145 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975158 4810 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975173 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975188 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975272 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975300 4810 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975313 4810 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975326 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975341 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975354 4810 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975368 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975381 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975393 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975407 4810 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975419 4810 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975431 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975444 4810 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975456 4810 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975469 4810 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 10 
06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975482 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975494 4810 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975508 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975520 4810 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975532 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975544 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975557 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975572 4810 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975584 4810 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975597 4810 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975611 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.971678 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972146 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972781 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.972839 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973220 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973366 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.976759 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.973609 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.974265 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975826 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.975891 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.976116 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.976284 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.976390 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.976481 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.977247 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.977783 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.978110 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.978162 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.978181 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.978454 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.978655 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.978849 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.979017 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.979041 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.979062 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.979403 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.979417 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.979417 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.979521 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.979818 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.979971 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.980083 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.980362 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.980432 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.980539 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.980714 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.980784 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.980939 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.981003 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.981129 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.981176 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.976734 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.981503 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.981520 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: E0110 06:46:29.981780 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 10 06:46:29 crc kubenswrapper[4810]: E0110 06:46:29.981902 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:30.481886789 +0000 UTC m=+19.097379662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.982134 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.982565 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.983358 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.983519 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.983900 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.983976 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.984175 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.984422 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.984936 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.985064 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.985308 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.985369 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.985465 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.985708 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.985772 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: E0110 06:46:29.985988 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 10 06:46:29 crc kubenswrapper[4810]: E0110 06:46:29.986068 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:30.486046594 +0000 UTC m=+19.101539557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.986152 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.986283 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.986497 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.986768 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.986788 4810 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.987032 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.987213 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.987373 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.987502 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.987332 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.987989 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.985382 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.988157 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.990435 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.991753 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.994730 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.994977 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.995066 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.995482 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.995545 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.997655 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.997835 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.997870 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.998990 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:29 crc kubenswrapper[4810]: I0110 06:46:29.999219 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.000305 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.000334 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.000576 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.000624 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.000649 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.000803 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.000868 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.000901 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.000932 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.000952 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.000963 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.000982 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.001037 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:30.501020625 +0000 UTC m=+19.116513518 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.004059 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.005316 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.005780 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.010242 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.010280 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.010301 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.010387 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:30.510348948 +0000 UTC m=+19.125841841 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.012848 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.013062 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.013885 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.014846 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.021435 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-w95z8"] Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.021794 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-w95z8" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.022379 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.023279 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.029792 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.030011 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.031691 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.045555 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.045583 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.045592 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.045605 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.045613 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:30Z","lastTransitionTime":"2026-01-10T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.050864 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.061401 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.070330 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.078806 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079067 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073-hosts-file\") pod \"node-resolver-w95z8\" (UID: \"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\") " pod="openshift-dns/node-resolver-w95z8" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079114 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079136 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079183 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqlbq\" (UniqueName: \"kubernetes.io/projected/11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073-kube-api-access-cqlbq\") pod 
\"node-resolver-w95z8\" (UID: \"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\") " pod="openshift-dns/node-resolver-w95z8" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079276 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079295 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079309 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079322 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079334 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079345 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079357 4810 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079367 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079378 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079389 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079399 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079409 4810 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079420 4810 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079430 4810 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079439 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079450 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079461 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079471 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079481 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079491 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079501 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 
06:46:30.079511 4810 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079522 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079533 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079546 4810 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079556 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079567 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079577 4810 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079621 4810 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079632 4810 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079643 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079653 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079664 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079676 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079687 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079697 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 
10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079707 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079718 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079729 4810 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079741 4810 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079756 4810 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079769 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079780 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 
06:46:30.079791 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079802 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079813 4810 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079825 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079836 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079848 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079859 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079870 4810 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079882 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079894 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079905 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079915 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079926 4810 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079938 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079949 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079962 4810 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079973 4810 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079985 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.079995 4810 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080006 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080017 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080028 4810 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 10 
06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080039 4810 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080051 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080062 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080074 4810 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080087 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080099 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080109 4810 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080120 4810 reconciler_common.go:293] 
"Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080131 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080142 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080154 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080165 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080175 4810 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080185 4810 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080366 4810 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080381 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080379 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080392 4810 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080424 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080437 4810 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080441 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080449 4810 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080461 4810 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080474 4810 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080489 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080500 4810 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080512 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080523 4810 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080533 4810 
reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080543 4810 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080554 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080566 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.080577 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.089011 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.113624 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.124600 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.132844 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.140577 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.146744 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.147874 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.147905 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.147917 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.147931 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.147969 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:30Z","lastTransitionTime":"2026-01-10T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.148045 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.153438 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.159044 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.166549 4810 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 10 06:46:30 crc kubenswrapper[4810]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Jan 10 06:46:30 crc kubenswrapper[4810]: set -o allexport Jan 10 06:46:30 crc kubenswrapper[4810]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 10 06:46:30 crc kubenswrapper[4810]: source /etc/kubernetes/apiserver-url.env Jan 10 06:46:30 crc kubenswrapper[4810]: else Jan 10 06:46:30 crc kubenswrapper[4810]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 10 06:46:30 crc kubenswrapper[4810]: exit 1 Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 10 06:46:30 crc kubenswrapper[4810]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 10 06:46:30 crc kubenswrapper[4810]: > logger="UnhandledError" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.167807 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.167918 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 10 06:46:30 crc kubenswrapper[4810]: W0110 06:46:30.177187 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-6ce62f0b997fdef08c8a3a092d24f72e12e38cb04cb46eb24b70f3bb4fe83bdb WatchSource:0}: Error finding container 6ce62f0b997fdef08c8a3a092d24f72e12e38cb04cb46eb24b70f3bb4fe83bdb: Status 404 returned error can't find the container with id 6ce62f0b997fdef08c8a3a092d24f72e12e38cb04cb46eb24b70f3bb4fe83bdb Jan 10 06:46:30 crc kubenswrapper[4810]: W0110 06:46:30.178021 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-bd83a1c502735c1ad215c74fd1625ddcb87f575c62ae7350b3384a8268a739b2 WatchSource:0}: Error finding container bd83a1c502735c1ad215c74fd1625ddcb87f575c62ae7350b3384a8268a739b2: Status 404 returned error can't find the container with id bd83a1c502735c1ad215c74fd1625ddcb87f575c62ae7350b3384a8268a739b2 Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.179440 4810 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 10 06:46:30 crc kubenswrapper[4810]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 10 06:46:30 crc kubenswrapper[4810]: if [[ -f "/env/_master" ]]; then Jan 10 06:46:30 crc kubenswrapper[4810]: set -o allexport Jan 10 06:46:30 crc kubenswrapper[4810]: source "/env/_master" Jan 10 06:46:30 crc kubenswrapper[4810]: set +o allexport Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 10 06:46:30 crc kubenswrapper[4810]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 10 06:46:30 crc kubenswrapper[4810]: ho_enable="--enable-hybrid-overlay" Jan 10 06:46:30 crc kubenswrapper[4810]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 10 06:46:30 crc kubenswrapper[4810]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 10 06:46:30 crc kubenswrapper[4810]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 10 06:46:30 crc kubenswrapper[4810]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 10 06:46:30 crc kubenswrapper[4810]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 10 06:46:30 crc kubenswrapper[4810]: --webhook-host=127.0.0.1 \ Jan 10 06:46:30 crc kubenswrapper[4810]: --webhook-port=9743 \ Jan 10 06:46:30 crc kubenswrapper[4810]: ${ho_enable} \ Jan 10 06:46:30 crc kubenswrapper[4810]: --enable-interconnect \ Jan 10 06:46:30 crc kubenswrapper[4810]: --disable-approver \ Jan 10 06:46:30 crc kubenswrapper[4810]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 10 06:46:30 crc kubenswrapper[4810]: --wait-for-kubernetes-api=200s \ Jan 10 06:46:30 crc kubenswrapper[4810]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 10 06:46:30 crc kubenswrapper[4810]: --loglevel="${LOGLEVEL}" Jan 10 06:46:30 crc kubenswrapper[4810]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 10 06:46:30 crc kubenswrapper[4810]: > logger="UnhandledError" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.182687 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.182822 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqlbq\" (UniqueName: \"kubernetes.io/projected/11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073-kube-api-access-cqlbq\") pod \"node-resolver-w95z8\" (UID: \"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\") " pod="openshift-dns/node-resolver-w95z8" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.182872 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073-hosts-file\") pod \"node-resolver-w95z8\" (UID: \"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\") " pod="openshift-dns/node-resolver-w95z8" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.182957 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073-hosts-file\") pod \"node-resolver-w95z8\" (UID: \"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\") " pod="openshift-dns/node-resolver-w95z8" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.184065 4810 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 10 06:46:30 crc kubenswrapper[4810]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 10 06:46:30 crc kubenswrapper[4810]: if [[ -f "/env/_master" ]]; then Jan 10 06:46:30 crc kubenswrapper[4810]: set -o allexport Jan 10 06:46:30 crc kubenswrapper[4810]: source "/env/_master" Jan 10 06:46:30 crc kubenswrapper[4810]: set +o allexport Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - 
start approver" Jan 10 06:46:30 crc kubenswrapper[4810]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 10 06:46:30 crc kubenswrapper[4810]: --disable-webhook \ Jan 10 06:46:30 crc kubenswrapper[4810]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 10 06:46:30 crc kubenswrapper[4810]: --loglevel="${LOGLEVEL}" Jan 10 06:46:30 crc kubenswrapper[4810]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 10 06:46:30 crc kubenswrapper[4810]: > logger="UnhandledError" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.184485 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.186106 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.202923 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqlbq\" (UniqueName: \"kubernetes.io/projected/11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073-kube-api-access-cqlbq\") pod \"node-resolver-w95z8\" (UID: \"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\") " pod="openshift-dns/node-resolver-w95z8" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.228015 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.232316 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.234447 4810 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.238420 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.246675 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.249927 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.249949 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.249959 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.249974 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.249987 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:30Z","lastTransitionTime":"2026-01-10T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.255597 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.263918 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.273859 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.283354 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.290218 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.299245 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.309899 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.318832 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868
575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.332137 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.339758 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.346690 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-w95z8" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.347119 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.351629 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.351663 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.351676 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.351693 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.351704 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:30Z","lastTransitionTime":"2026-01-10T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.357482 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: W0110 06:46:30.360732 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11fe83f0_d81e_4bb1_8ba5_a4a41a5b4073.slice/crio-cf80c18a0076a8543e5668032df7cd32e16c404fe08e6fcf4bb8e6dcc1f8b6bb WatchSource:0}: Error finding container cf80c18a0076a8543e5668032df7cd32e16c404fe08e6fcf4bb8e6dcc1f8b6bb: Status 404 returned error can't find the container with id cf80c18a0076a8543e5668032df7cd32e16c404fe08e6fcf4bb8e6dcc1f8b6bb Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.363422 4810 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 10 06:46:30 crc kubenswrapper[4810]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Jan 10 06:46:30 crc kubenswrapper[4810]: set -uo pipefail Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc 
kubenswrapper[4810]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 10 06:46:30 crc kubenswrapper[4810]: HOSTS_FILE="/etc/hosts" Jan 10 06:46:30 crc kubenswrapper[4810]: TEMP_FILE="/etc/hosts.tmp" Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: # Make a temporary file with the old hosts file's attributes. Jan 10 06:46:30 crc kubenswrapper[4810]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 10 06:46:30 crc kubenswrapper[4810]: echo "Failed to preserve hosts file. Exiting." Jan 10 06:46:30 crc kubenswrapper[4810]: exit 1 Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: while true; do Jan 10 06:46:30 crc kubenswrapper[4810]: declare -A svc_ips Jan 10 06:46:30 crc kubenswrapper[4810]: for svc in "${services[@]}"; do Jan 10 06:46:30 crc kubenswrapper[4810]: # Fetch service IP from cluster dns if present. We make several tries Jan 10 06:46:30 crc kubenswrapper[4810]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Jan 10 06:46:30 crc kubenswrapper[4810]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 10 06:46:30 crc kubenswrapper[4810]: # support UDP loadbalancers and require reaching DNS through TCP. 
Jan 10 06:46:30 crc kubenswrapper[4810]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 10 06:46:30 crc kubenswrapper[4810]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 10 06:46:30 crc kubenswrapper[4810]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 10 06:46:30 crc kubenswrapper[4810]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 10 06:46:30 crc kubenswrapper[4810]: for i in ${!cmds[*]} Jan 10 06:46:30 crc kubenswrapper[4810]: do Jan 10 06:46:30 crc kubenswrapper[4810]: ips=($(eval "${cmds[i]}")) Jan 10 06:46:30 crc kubenswrapper[4810]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 10 06:46:30 crc kubenswrapper[4810]: svc_ips["${svc}"]="${ips[@]}" Jan 10 06:46:30 crc kubenswrapper[4810]: break Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: done Jan 10 06:46:30 crc kubenswrapper[4810]: done Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: # Update /etc/hosts only if we get valid service IPs Jan 10 06:46:30 crc kubenswrapper[4810]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 10 06:46:30 crc kubenswrapper[4810]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 10 06:46:30 crc kubenswrapper[4810]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 10 06:46:30 crc kubenswrapper[4810]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 10 06:46:30 crc kubenswrapper[4810]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 10 06:46:30 crc kubenswrapper[4810]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 10 06:46:30 crc kubenswrapper[4810]: sleep 60 & wait Jan 10 06:46:30 crc kubenswrapper[4810]: continue Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: # Append resolver entries for services Jan 10 06:46:30 crc kubenswrapper[4810]: rc=0 Jan 10 06:46:30 crc kubenswrapper[4810]: for svc in "${!svc_ips[@]}"; do Jan 10 06:46:30 crc kubenswrapper[4810]: for ip in ${svc_ips[${svc}]}; do Jan 10 06:46:30 crc kubenswrapper[4810]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 10 06:46:30 crc kubenswrapper[4810]: done Jan 10 06:46:30 crc kubenswrapper[4810]: done Jan 10 06:46:30 crc kubenswrapper[4810]: if [[ $rc -ne 0 ]]; then Jan 10 06:46:30 crc kubenswrapper[4810]: sleep 60 & wait Jan 10 06:46:30 crc kubenswrapper[4810]: continue Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 10 06:46:30 crc kubenswrapper[4810]: # Replace /etc/hosts with our modified version if needed Jan 10 06:46:30 crc kubenswrapper[4810]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 10 06:46:30 crc kubenswrapper[4810]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: sleep 60 & wait Jan 10 06:46:30 crc kubenswrapper[4810]: unset svc_ips Jan 10 06:46:30 crc kubenswrapper[4810]: done Jan 10 06:46:30 crc kubenswrapper[4810]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqlbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-w95z8_openshift-dns(11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 10 06:46:30 crc kubenswrapper[4810]: > logger="UnhandledError" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.366782 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-w95z8" 
podUID="11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.369563 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.378134 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.385112 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.412007 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.420638 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.429309 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.430933 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.437834 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.446028 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.454538 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.454594 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.454612 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.454636 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.454654 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:30Z","lastTransitionTime":"2026-01-10T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.457222 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.467472 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.476324 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.484831 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.484933 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.485057 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.485141 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:31.485117669 +0000 UTC m=+20.100610572 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.485176 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:46:31.48516238 +0000 UTC m=+20.100655283 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.488944 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.497912 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.507395 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.530798 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.546736 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.556956 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.557009 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.557025 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.557051 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.557068 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:30Z","lastTransitionTime":"2026-01-10T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.560090 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.571710 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.580891 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.585516 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.585708 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.585832 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.585760 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.586064 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:31.586045061 +0000 UTC m=+20.201537964 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.585894 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.586306 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.586390 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.586486 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:31.586476021 +0000 UTC m=+20.201968914 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.585971 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.586666 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.586759 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.586863 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:31.5868518 +0000 UTC m=+20.202344693 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.593416 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.601840 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.611802 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.622711 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.631403 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.659704 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.659731 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.659739 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.659770 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.659780 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:30Z","lastTransitionTime":"2026-01-10T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.692011 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.692073 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.692011 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.692139 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.692182 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.692258 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.762390 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.762444 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.762462 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.762482 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.762497 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:30Z","lastTransitionTime":"2026-01-10T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.812551 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6ce62f0b997fdef08c8a3a092d24f72e12e38cb04cb46eb24b70f3bb4fe83bdb"} Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.813947 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w95z8" event={"ID":"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073","Type":"ContainerStarted","Data":"cf80c18a0076a8543e5668032df7cd32e16c404fe08e6fcf4bb8e6dcc1f8b6bb"} Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.815089 4810 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 10 06:46:30 crc kubenswrapper[4810]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 10 06:46:30 crc kubenswrapper[4810]: if [[ -f "/env/_master" ]]; then Jan 10 06:46:30 crc kubenswrapper[4810]: set -o allexport Jan 10 06:46:30 crc kubenswrapper[4810]: source "/env/_master" Jan 10 06:46:30 crc kubenswrapper[4810]: set +o allexport Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 10 06:46:30 crc kubenswrapper[4810]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 10 06:46:30 crc kubenswrapper[4810]: ho_enable="--enable-hybrid-overlay" Jan 10 06:46:30 crc kubenswrapper[4810]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 10 06:46:30 crc kubenswrapper[4810]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 10 06:46:30 crc kubenswrapper[4810]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 10 06:46:30 crc kubenswrapper[4810]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 10 06:46:30 crc kubenswrapper[4810]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 10 06:46:30 crc kubenswrapper[4810]: --webhook-host=127.0.0.1 \ Jan 10 06:46:30 crc kubenswrapper[4810]: --webhook-port=9743 \ Jan 10 06:46:30 crc kubenswrapper[4810]: ${ho_enable} \ Jan 10 06:46:30 crc kubenswrapper[4810]: --enable-interconnect \ Jan 10 06:46:30 crc kubenswrapper[4810]: --disable-approver \ Jan 10 06:46:30 crc kubenswrapper[4810]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 10 06:46:30 crc kubenswrapper[4810]: --wait-for-kubernetes-api=200s \ Jan 10 06:46:30 crc kubenswrapper[4810]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 10 06:46:30 crc kubenswrapper[4810]: --loglevel="${LOGLEVEL}" Jan 10 06:46:30 crc kubenswrapper[4810]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 10 06:46:30 crc kubenswrapper[4810]: > logger="UnhandledError" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.815659 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"bd83a1c502735c1ad215c74fd1625ddcb87f575c62ae7350b3384a8268a739b2"} Jan 10 06:46:30 crc kubenswrapper[4810]: 
E0110 06:46:30.815586 4810 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 10 06:46:30 crc kubenswrapper[4810]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Jan 10 06:46:30 crc kubenswrapper[4810]: set -uo pipefail Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Jan 10 06:46:30 crc kubenswrapper[4810]: HOSTS_FILE="/etc/hosts" Jan 10 06:46:30 crc kubenswrapper[4810]: TEMP_FILE="/etc/hosts.tmp" Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: IFS=', ' read -r -a services <<< "${SERVICES}" Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: # Make a temporary file with the old hosts file's attributes. Jan 10 06:46:30 crc kubenswrapper[4810]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Jan 10 06:46:30 crc kubenswrapper[4810]: echo "Failed to preserve hosts file. Exiting." Jan 10 06:46:30 crc kubenswrapper[4810]: exit 1 Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: while true; do Jan 10 06:46:30 crc kubenswrapper[4810]: declare -A svc_ips Jan 10 06:46:30 crc kubenswrapper[4810]: for svc in "${services[@]}"; do Jan 10 06:46:30 crc kubenswrapper[4810]: # Fetch service IP from cluster dns if present. We make several tries Jan 10 06:46:30 crc kubenswrapper[4810]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Jan 10 06:46:30 crc kubenswrapper[4810]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Jan 10 06:46:30 crc kubenswrapper[4810]: # support UDP loadbalancers and require reaching DNS through TCP. Jan 10 06:46:30 crc kubenswrapper[4810]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 10 06:46:30 crc kubenswrapper[4810]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 10 06:46:30 crc kubenswrapper[4810]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Jan 10 06:46:30 crc kubenswrapper[4810]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Jan 10 06:46:30 crc kubenswrapper[4810]: for i in ${!cmds[*]} Jan 10 06:46:30 crc kubenswrapper[4810]: do Jan 10 06:46:30 crc kubenswrapper[4810]: ips=($(eval "${cmds[i]}")) Jan 10 06:46:30 crc kubenswrapper[4810]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Jan 10 06:46:30 crc kubenswrapper[4810]: svc_ips["${svc}"]="${ips[@]}" Jan 10 06:46:30 crc kubenswrapper[4810]: break Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: done Jan 10 06:46:30 crc kubenswrapper[4810]: done Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: # Update /etc/hosts only if we get valid service IPs Jan 10 06:46:30 crc kubenswrapper[4810]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Jan 10 06:46:30 crc kubenswrapper[4810]: # Stale entries could exist in /etc/hosts if the service is deleted Jan 10 06:46:30 crc kubenswrapper[4810]: if [[ -n "${svc_ips[*]-}" ]]; then Jan 10 06:46:30 crc kubenswrapper[4810]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Jan 10 06:46:30 crc kubenswrapper[4810]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Jan 10 06:46:30 crc kubenswrapper[4810]: # Only continue rebuilding the hosts entries if its original content is preserved Jan 10 06:46:30 crc kubenswrapper[4810]: sleep 60 & wait Jan 10 06:46:30 crc kubenswrapper[4810]: continue Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: # Append resolver entries for services Jan 10 06:46:30 crc kubenswrapper[4810]: rc=0 Jan 10 06:46:30 crc kubenswrapper[4810]: for svc in "${!svc_ips[@]}"; do Jan 10 06:46:30 crc kubenswrapper[4810]: for ip in ${svc_ips[${svc}]}; do Jan 10 06:46:30 crc kubenswrapper[4810]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Jan 10 06:46:30 crc kubenswrapper[4810]: done Jan 10 06:46:30 crc kubenswrapper[4810]: done Jan 10 06:46:30 crc kubenswrapper[4810]: if [[ $rc -ne 0 ]]; then Jan 10 06:46:30 crc kubenswrapper[4810]: sleep 60 & wait Jan 10 06:46:30 crc kubenswrapper[4810]: continue Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Jan 10 06:46:30 crc kubenswrapper[4810]: # Replace /etc/hosts with our modified version if needed Jan 10 06:46:30 crc kubenswrapper[4810]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Jan 10 06:46:30 crc kubenswrapper[4810]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: sleep 60 & wait Jan 10 06:46:30 crc kubenswrapper[4810]: unset svc_ips Jan 10 06:46:30 crc kubenswrapper[4810]: done Jan 10 06:46:30 crc kubenswrapper[4810]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqlbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-w95z8_openshift-dns(11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 10 06:46:30 crc kubenswrapper[4810]: > logger="UnhandledError" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.816708 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a487937617b79b7db84596c12ea6fb8a0da92c4aa666b07db88e96860c3682ef"} Jan 10 06:46:30 crc 
kubenswrapper[4810]: E0110 06:46:30.817477 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-w95z8" podUID="11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.818289 4810 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 10 06:46:30 crc kubenswrapper[4810]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Jan 10 06:46:30 crc kubenswrapper[4810]: set -o allexport Jan 10 06:46:30 crc kubenswrapper[4810]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 10 06:46:30 crc kubenswrapper[4810]: source /etc/kubernetes/apiserver-url.env Jan 10 06:46:30 crc kubenswrapper[4810]: else Jan 10 06:46:30 crc kubenswrapper[4810]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 10 06:46:30 crc kubenswrapper[4810]: exit 1 Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 10 06:46:30 crc kubenswrapper[4810]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 10 06:46:30 crc kubenswrapper[4810]: > logger="UnhandledError" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.818870 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.819496 4810 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 10 06:46:30 crc kubenswrapper[4810]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 10 06:46:30 crc kubenswrapper[4810]: if [[ -f "/env/_master" ]]; then Jan 10 06:46:30 crc kubenswrapper[4810]: set -o allexport Jan 10 06:46:30 crc kubenswrapper[4810]: source "/env/_master" Jan 10 06:46:30 crc kubenswrapper[4810]: set +o allexport Jan 10 06:46:30 crc kubenswrapper[4810]: fi Jan 10 06:46:30 crc kubenswrapper[4810]: Jan 10 06:46:30 crc kubenswrapper[4810]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 10 06:46:30 crc kubenswrapper[4810]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 10 06:46:30 crc kubenswrapper[4810]: --disable-webhook \ Jan 10 06:46:30 crc kubenswrapper[4810]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 10 06:46:30 crc kubenswrapper[4810]: --loglevel="${LOGLEVEL}" Jan 10 06:46:30 crc kubenswrapper[4810]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 10 06:46:30 crc kubenswrapper[4810]: > logger="UnhandledError" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.819817 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.820304 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Jan 10 06:46:30 crc kubenswrapper[4810]: E0110 06:46:30.821289 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.844340 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\
\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.854033 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.862684 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868
575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.865018 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.865055 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.865066 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.865081 4810 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.865092 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:30Z","lastTransitionTime":"2026-01-10T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.872093 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.886310 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.902399 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.916938 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.927090 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.942313 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.949896 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.959332 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.967037 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.967090 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.967104 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.967122 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.967133 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:30Z","lastTransitionTime":"2026-01-10T06:46:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.967442 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8
e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.976807 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:30 crc kubenswrapper[4810]: I0110 06:46:30.988529 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.002725 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.009372 4810 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.015528 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.029792 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.038255 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.045640 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.068762 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.068962 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.069041 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.069109 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.069165 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:31Z","lastTransitionTime":"2026-01-10T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.071552 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.172030 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.172286 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.172381 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.172445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.172506 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:31Z","lastTransitionTime":"2026-01-10T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.278241 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.278296 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.278317 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.278340 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.278356 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:31Z","lastTransitionTime":"2026-01-10T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.380633 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.380828 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.380903 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.380975 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.381041 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:31Z","lastTransitionTime":"2026-01-10T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.483502 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.483543 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.483555 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.483574 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.483586 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:31Z","lastTransitionTime":"2026-01-10T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.492377 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.492471 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:31 crc kubenswrapper[4810]: E0110 06:46:31.492580 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:46:31 crc kubenswrapper[4810]: E0110 06:46:31.492623 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:46:33.492589132 +0000 UTC m=+22.108082045 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:46:31 crc kubenswrapper[4810]: E0110 06:46:31.492665 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:33.492652393 +0000 UTC m=+22.108145316 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.536127 4810 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 10 06:46:31 crc kubenswrapper[4810]: W0110 06:46:31.537413 4810 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.585483 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.585699 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:31 crc 
kubenswrapper[4810]: I0110 06:46:31.585807 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.585889 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.585962 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:31Z","lastTransitionTime":"2026-01-10T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.593461 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.593502 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.593523 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:31 crc kubenswrapper[4810]: E0110 06:46:31.593620 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:46:31 crc kubenswrapper[4810]: E0110 06:46:31.593664 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:33.593651437 +0000 UTC m=+22.209144320 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:46:31 crc kubenswrapper[4810]: E0110 06:46:31.593984 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:46:31 crc kubenswrapper[4810]: E0110 06:46:31.594003 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:46:31 crc kubenswrapper[4810]: E0110 06:46:31.594013 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" 
not registered] Jan 10 06:46:31 crc kubenswrapper[4810]: E0110 06:46:31.594047 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:33.594038246 +0000 UTC m=+22.209531129 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:31 crc kubenswrapper[4810]: E0110 06:46:31.594275 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:46:31 crc kubenswrapper[4810]: E0110 06:46:31.594370 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:46:31 crc kubenswrapper[4810]: E0110 06:46:31.594442 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:31 crc kubenswrapper[4810]: E0110 06:46:31.594575 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-10 06:46:33.594552797 +0000 UTC m=+22.210045760 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.619177 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-8c5qp"] Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.619524 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.620181 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-t7gh2"] Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.620403 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.621139 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.621446 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.621552 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.621734 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.621830 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.622417 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.622636 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.622738 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.624437 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.625714 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.629920 4810 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.638290 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.646489 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.657268 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.666270 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.677114 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868
575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.688180 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.688419 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.688486 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.688550 4810 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.688435 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.688616 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:31Z","lastTransitionTime":"2026-01-10T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.694647 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-multus-cni-dir\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.694694 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-os-release\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.694730 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-multus-socket-dir-parent\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.694754 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-var-lib-cni-bin\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.694773 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-run-k8s-cni-cncf-io\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 
10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.694789 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/34d87e8a-cdfb-46ed-97db-2d07cffec516-multus-daemon-config\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.694806 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv7jd\" (UniqueName: \"kubernetes.io/projected/b5b79429-9259-412f-bab8-27865ab7029b-kube-api-access-mv7jd\") pod \"machine-config-daemon-8c5qp\" (UID: \"b5b79429-9259-412f-bab8-27865ab7029b\") " pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.694820 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-system-cni-dir\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.694833 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-var-lib-cni-multus\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.694847 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-run-multus-certs\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " 
pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.694862 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5b79429-9259-412f-bab8-27865ab7029b-proxy-tls\") pod \"machine-config-daemon-8c5qp\" (UID: \"b5b79429-9259-412f-bab8-27865ab7029b\") " pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.694978 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-var-lib-kubelet\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.695020 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcjhj\" (UniqueName: \"kubernetes.io/projected/34d87e8a-cdfb-46ed-97db-2d07cffec516-kube-api-access-jcjhj\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.695044 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5b79429-9259-412f-bab8-27865ab7029b-mcd-auth-proxy-config\") pod \"machine-config-daemon-8c5qp\" (UID: \"b5b79429-9259-412f-bab8-27865ab7029b\") " pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.695065 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-etc-kubernetes\") pod \"multus-t7gh2\" (UID: 
\"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.695081 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b5b79429-9259-412f-bab8-27865ab7029b-rootfs\") pod \"machine-config-daemon-8c5qp\" (UID: \"b5b79429-9259-412f-bab8-27865ab7029b\") " pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.695095 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-hostroot\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.695111 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-cnibin\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.695125 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/34d87e8a-cdfb-46ed-97db-2d07cffec516-cni-binary-copy\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.695144 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-run-netns\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" 
Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.695158 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-multus-conf-dir\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.696622 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.697176 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.698354 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.699008 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.700020 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.700558 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.701149 4810 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.702065 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.702751 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.704323 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.704851 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.705944 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.706484 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.707118 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.709764 4810 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.710382 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.711461 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.711886 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.712615 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.713628 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.714117 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.715081 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.715602 4810 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.716658 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.717100 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.718473 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.719708 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.720628 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.721610 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.722093 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.722290 4810 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.723284 4810 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.723460 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.725111 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.726000 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.726595 4810 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.728145 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.728865 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.729946 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.730602 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.731689 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.732177 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.733329 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.733416 4810 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.734054 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.735072 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.735641 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.736567 4810 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.737396 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.738503 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.739001 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.739881 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.740398 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.741481 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.742097 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.742614 4810 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.749920 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.765882 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.781328 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
5Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.790615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.790640 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.790649 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.790661 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.790671 4810 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:31Z","lastTransitionTime":"2026-01-10T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.792002 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCo
unt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795446 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-multus-cni-dir\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795558 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-os-release\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795630 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-os-release\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " 
pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795621 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-multus-cni-dir\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795723 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-multus-socket-dir-parent\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795776 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-var-lib-cni-bin\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795810 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-run-k8s-cni-cncf-io\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795827 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/34d87e8a-cdfb-46ed-97db-2d07cffec516-multus-daemon-config\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795838 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-var-lib-cni-bin\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795846 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-system-cni-dir\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795872 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-system-cni-dir\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795873 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-var-lib-cni-multus\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795885 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-run-k8s-cni-cncf-io\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795905 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-run-multus-certs\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795927 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-run-multus-certs\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795934 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5b79429-9259-412f-bab8-27865ab7029b-proxy-tls\") pod \"machine-config-daemon-8c5qp\" (UID: \"b5b79429-9259-412f-bab8-27865ab7029b\") " pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795957 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv7jd\" (UniqueName: \"kubernetes.io/projected/b5b79429-9259-412f-bab8-27865ab7029b-kube-api-access-mv7jd\") pod \"machine-config-daemon-8c5qp\" (UID: \"b5b79429-9259-412f-bab8-27865ab7029b\") " pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795893 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-var-lib-cni-multus\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795995 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-var-lib-kubelet\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.795976 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-var-lib-kubelet\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796092 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcjhj\" (UniqueName: \"kubernetes.io/projected/34d87e8a-cdfb-46ed-97db-2d07cffec516-kube-api-access-jcjhj\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796123 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5b79429-9259-412f-bab8-27865ab7029b-mcd-auth-proxy-config\") pod \"machine-config-daemon-8c5qp\" (UID: \"b5b79429-9259-412f-bab8-27865ab7029b\") " pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796168 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-etc-kubernetes\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796188 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b5b79429-9259-412f-bab8-27865ab7029b-rootfs\") pod 
\"machine-config-daemon-8c5qp\" (UID: \"b5b79429-9259-412f-bab8-27865ab7029b\") " pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796225 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-cnibin\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796240 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/34d87e8a-cdfb-46ed-97db-2d07cffec516-cni-binary-copy\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796258 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-run-netns\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796275 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-hostroot\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796271 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-etc-kubernetes\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796290 
4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-cnibin\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796294 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-multus-conf-dir\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796327 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-host-run-netns\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796337 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b5b79429-9259-412f-bab8-27865ab7029b-rootfs\") pod \"machine-config-daemon-8c5qp\" (UID: \"b5b79429-9259-412f-bab8-27865ab7029b\") " pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796329 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-multus-conf-dir\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796360 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-hostroot\") pod 
\"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796786 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b5b79429-9259-412f-bab8-27865ab7029b-mcd-auth-proxy-config\") pod \"machine-config-daemon-8c5qp\" (UID: \"b5b79429-9259-412f-bab8-27865ab7029b\") " pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796877 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/34d87e8a-cdfb-46ed-97db-2d07cffec516-multus-socket-dir-parent\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.796996 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/34d87e8a-cdfb-46ed-97db-2d07cffec516-multus-daemon-config\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.797125 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/34d87e8a-cdfb-46ed-97db-2d07cffec516-cni-binary-copy\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.798962 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b5b79429-9259-412f-bab8-27865ab7029b-proxy-tls\") pod \"machine-config-daemon-8c5qp\" (UID: \"b5b79429-9259-412f-bab8-27865ab7029b\") " 
pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.835306 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv7jd\" (UniqueName: \"kubernetes.io/projected/b5b79429-9259-412f-bab8-27865ab7029b-kube-api-access-mv7jd\") pod \"machine-config-daemon-8c5qp\" (UID: \"b5b79429-9259-412f-bab8-27865ab7029b\") " pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.859248 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcjhj\" (UniqueName: \"kubernetes.io/projected/34d87e8a-cdfb-46ed-97db-2d07cffec516-kube-api-access-jcjhj\") pod \"multus-t7gh2\" (UID: \"34d87e8a-cdfb-46ed-97db-2d07cffec516\") " pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.869415 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.892753 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.892801 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.892817 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 
06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.892841 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.892858 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:31Z","lastTransitionTime":"2026-01-10T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.908824 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.931303 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.937401 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-t7gh2" Jan 10 06:46:31 crc kubenswrapper[4810]: W0110 06:46:31.954377 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34d87e8a_cdfb_46ed_97db_2d07cffec516.slice/crio-7a15f297d2639a7a81acc1a6727f3258121c7dcee1a8506a8112ae427a5728d7 WatchSource:0}: Error finding container 7a15f297d2639a7a81acc1a6727f3258121c7dcee1a8506a8112ae427a5728d7: Status 404 returned error can't find the container with id 7a15f297d2639a7a81acc1a6727f3258121c7dcee1a8506a8112ae427a5728d7 Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.955318 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.993973 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dwv4g"] Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.994540 4810 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.994727 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.996099 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t4zqb"] Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.996993 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.998373 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.998531 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.998662 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.998933 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:31 crc kubenswrapper[4810]: I0110 06:46:31.999148 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:31Z","lastTransitionTime":"2026-01-10T06:46:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.023645 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.043343 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.064044 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.084276 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.098897 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-openvswitch\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.098931 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/73649741-6005-4ee3-8b33-7b703540835e-cnibin\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.098950 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/73649741-6005-4ee3-8b33-7b703540835e-system-cni-dir\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " 
pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.098986 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099003 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-run-ovn-kubernetes\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099017 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-cni-netd\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099034 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-run-netns\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099051 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-ovn\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099112 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/dce51084-e094-437c-a988-66b17982fd5d-kube-api-access-55f7t\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099169 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-slash\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099214 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-kubelet\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099234 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-var-lib-openvswitch\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099261 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-etc-openvswitch\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099283 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l29s7\" (UniqueName: \"kubernetes.io/projected/73649741-6005-4ee3-8b33-7b703540835e-kube-api-access-l29s7\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099303 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-systemd-units\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099323 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-env-overrides\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099396 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/73649741-6005-4ee3-8b33-7b703540835e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099420 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-ovnkube-config\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099461 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/73649741-6005-4ee3-8b33-7b703540835e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099480 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-systemd\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099494 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-log-socket\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099536 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-ovnkube-script-lib\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc 
kubenswrapper[4810]: I0110 06:46:32.099552 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-node-log\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099566 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-cni-bin\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099582 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/73649741-6005-4ee3-8b33-7b703540835e-os-release\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099596 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dce51084-e094-437c-a988-66b17982fd5d-ovn-node-metrics-cert\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.099664 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/73649741-6005-4ee3-8b33-7b703540835e-cni-binary-copy\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " 
pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.101622 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.101797 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.101900 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.102027 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.102140 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:32Z","lastTransitionTime":"2026-01-10T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.103685 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.123364 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.143248 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.163400 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.184567 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.200500 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.200656 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.200782 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-run-ovn-kubernetes\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.200846 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-run-ovn-kubernetes\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.200943 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-cni-netd\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.201067 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-slash\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.201158 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-run-netns\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.201011 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-cni-netd\") 
pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.201426 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-run-netns\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.201344 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-slash\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.201479 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-ovn\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.201357 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-ovn\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.201732 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/dce51084-e094-437c-a988-66b17982fd5d-kube-api-access-55f7t\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.201832 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-kubelet\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.201925 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-var-lib-openvswitch\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.202013 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-etc-openvswitch\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.202227 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l29s7\" (UniqueName: \"kubernetes.io/projected/73649741-6005-4ee3-8b33-7b703540835e-kube-api-access-l29s7\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.202332 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-systemd-units\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.202495 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-env-overrides\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.202677 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/73649741-6005-4ee3-8b33-7b703540835e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.202849 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/73649741-6005-4ee3-8b33-7b703540835e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.202434 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-etc-openvswitch\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.202406 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-var-lib-openvswitch\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.202284 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-kubelet\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.202460 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-systemd-units\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.203179 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-ovnkube-config\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.203337 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/73649741-6005-4ee3-8b33-7b703540835e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.203479 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-systemd\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc 
kubenswrapper[4810]: I0110 06:46:32.203603 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-log-socket\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.203726 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-node-log\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.203856 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-cni-bin\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.203963 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-systemd\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.204093 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-ovnkube-script-lib\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.204282 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-ovnkube-config\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.204373 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-cni-bin\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.204420 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-log-socket\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.204469 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-node-log\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.203919 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-env-overrides\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.204702 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/73649741-6005-4ee3-8b33-7b703540835e-os-release\") pod 
\"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.204873 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dce51084-e094-437c-a988-66b17982fd5d-ovn-node-metrics-cert\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.205015 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/73649741-6005-4ee3-8b33-7b703540835e-cni-binary-copy\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.205161 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/73649741-6005-4ee3-8b33-7b703540835e-cnibin\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.205357 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-openvswitch\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.205484 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/73649741-6005-4ee3-8b33-7b703540835e-system-cni-dir\") pod 
\"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.205612 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-ovnkube-script-lib\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.205742 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/73649741-6005-4ee3-8b33-7b703540835e-system-cni-dir\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.205733 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-openvswitch\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.204883 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/73649741-6005-4ee3-8b33-7b703540835e-os-release\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.204782 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/73649741-6005-4ee3-8b33-7b703540835e-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.205791 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/73649741-6005-4ee3-8b33-7b703540835e-cnibin\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.206713 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/73649741-6005-4ee3-8b33-7b703540835e-cni-binary-copy\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.206813 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.206858 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.206875 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.206899 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.206916 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:32Z","lastTransitionTime":"2026-01-10T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.213764 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dce51084-e094-437c-a988-66b17982fd5d-ovn-node-metrics-cert\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.215545 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.247523 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/dce51084-e094-437c-a988-66b17982fd5d-kube-api-access-55f7t\") pod \"ovnkube-node-t4zqb\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") " pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.264445 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l29s7\" (UniqueName: \"kubernetes.io/projected/73649741-6005-4ee3-8b33-7b703540835e-kube-api-access-l29s7\") pod \"multus-additional-cni-plugins-dwv4g\" (UID: \"73649741-6005-4ee3-8b33-7b703540835e\") " pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.290173 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.309093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.309159 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.309180 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.309231 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.309262 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:32Z","lastTransitionTime":"2026-01-10T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.331037 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.371174 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.411252 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.411292 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.411302 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.411317 4810 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.411329 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:32Z","lastTransitionTime":"2026-01-10T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.412212 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.456806 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.489316 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.496396 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" Jan 10 06:46:32 crc kubenswrapper[4810]: W0110 06:46:32.506420 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73649741_6005_4ee3_8b33_7b703540835e.slice/crio-b1211c401a3e160320931fd0866d81853b408d20d712d37a01d1960e7d3cea34 WatchSource:0}: Error finding container b1211c401a3e160320931fd0866d81853b408d20d712d37a01d1960e7d3cea34: Status 404 returned error can't find the container with id b1211c401a3e160320931fd0866d81853b408d20d712d37a01d1960e7d3cea34 Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.515029 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.515069 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.515078 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.515092 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.515101 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:32Z","lastTransitionTime":"2026-01-10T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.530516 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.542023 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.569863 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.610049 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.621799 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.621873 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.621898 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.621928 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.621949 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:32Z","lastTransitionTime":"2026-01-10T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.649961 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: W0110 06:46:32.684685 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddce51084_e094_437c_a988_66b17982fd5d.slice/crio-6b7542e1b9f9cf82c1c9591d2fcfc6a66c34419eccc8b54cd40234db5c0676c4 WatchSource:0}: Error finding container 6b7542e1b9f9cf82c1c9591d2fcfc6a66c34419eccc8b54cd40234db5c0676c4: Status 404 returned error can't find the container with id 6b7542e1b9f9cf82c1c9591d2fcfc6a66c34419eccc8b54cd40234db5c0676c4 Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.691887 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.692147 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:32 crc kubenswrapper[4810]: E0110 06:46:32.692490 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.692676 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:32 crc kubenswrapper[4810]: E0110 06:46:32.692761 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:46:32 crc kubenswrapper[4810]: E0110 06:46:32.693091 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.693518 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.724818 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.724858 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.724871 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.724889 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.724901 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:32Z","lastTransitionTime":"2026-01-10T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.749297 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.772521 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a
50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.810584 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.822465 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7gh2" event={"ID":"34d87e8a-cdfb-46ed-97db-2d07cffec516","Type":"ContainerStarted","Data":"90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.822530 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7gh2" event={"ID":"34d87e8a-cdfb-46ed-97db-2d07cffec516","Type":"ContainerStarted","Data":"7a15f297d2639a7a81acc1a6727f3258121c7dcee1a8506a8112ae427a5728d7"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.824818 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerStarted","Data":"143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.824873 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerStarted","Data":"936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.824884 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerStarted","Data":"bda6fc475f4abdce49aa25b9f8ca3da94cc8ab76579c3db6d0201fae2f046a9a"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.826381 4810 generic.go:334] "Generic (PLEG): container finished" podID="dce51084-e094-437c-a988-66b17982fd5d" containerID="d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be" exitCode=0 Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.826406 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.826428 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerStarted","Data":"6b7542e1b9f9cf82c1c9591d2fcfc6a66c34419eccc8b54cd40234db5c0676c4"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.826687 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.826783 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.826870 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.826964 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.827113 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:32Z","lastTransitionTime":"2026-01-10T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.828044 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" event={"ID":"73649741-6005-4ee3-8b33-7b703540835e","Type":"ContainerStarted","Data":"b1211c401a3e160320931fd0866d81853b408d20d712d37a01d1960e7d3cea34"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.848251 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.889400 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.930006 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.930520 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.930644 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:32 crc 
kubenswrapper[4810]: I0110 06:46:32.930726 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.930806 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.930895 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:32Z","lastTransitionTime":"2026-01-10T06:46:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:32 crc kubenswrapper[4810]: I0110 06:46:32.971786 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.009320 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.035389 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.035684 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.035701 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.035729 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.035745 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:33Z","lastTransitionTime":"2026-01-10T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.053745 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: 
I0110 06:46:33.079556 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lmtrv"] Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.079955 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lmtrv" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.095896 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.103053 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.123594 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.138642 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.138695 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.138714 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.138739 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.138759 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:33Z","lastTransitionTime":"2026-01-10T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.143033 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.163868 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.212810 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.215351 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/856b0311-fbd9-44d8-ab6a-e5e93843ba75-host\") pod \"node-ca-lmtrv\" (UID: \"856b0311-fbd9-44d8-ab6a-e5e93843ba75\") " pod="openshift-image-registry/node-ca-lmtrv" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.215417 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxxlt\" (UniqueName: \"kubernetes.io/projected/856b0311-fbd9-44d8-ab6a-e5e93843ba75-kube-api-access-zxxlt\") pod \"node-ca-lmtrv\" (UID: \"856b0311-fbd9-44d8-ab6a-e5e93843ba75\") " pod="openshift-image-registry/node-ca-lmtrv" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.215690 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/856b0311-fbd9-44d8-ab6a-e5e93843ba75-serviceca\") pod \"node-ca-lmtrv\" (UID: \"856b0311-fbd9-44d8-ab6a-e5e93843ba75\") " pod="openshift-image-registry/node-ca-lmtrv" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.241923 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.242002 4810 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.242017 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.242034 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.242047 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:33Z","lastTransitionTime":"2026-01-10T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.261484 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.291276 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.316358 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/856b0311-fbd9-44d8-ab6a-e5e93843ba75-host\") pod \"node-ca-lmtrv\" (UID: \"856b0311-fbd9-44d8-ab6a-e5e93843ba75\") " pod="openshift-image-registry/node-ca-lmtrv" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.316417 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxxlt\" (UniqueName: \"kubernetes.io/projected/856b0311-fbd9-44d8-ab6a-e5e93843ba75-kube-api-access-zxxlt\") pod \"node-ca-lmtrv\" 
(UID: \"856b0311-fbd9-44d8-ab6a-e5e93843ba75\") " pod="openshift-image-registry/node-ca-lmtrv" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.316473 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/856b0311-fbd9-44d8-ab6a-e5e93843ba75-host\") pod \"node-ca-lmtrv\" (UID: \"856b0311-fbd9-44d8-ab6a-e5e93843ba75\") " pod="openshift-image-registry/node-ca-lmtrv" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.316485 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/856b0311-fbd9-44d8-ab6a-e5e93843ba75-serviceca\") pod \"node-ca-lmtrv\" (UID: \"856b0311-fbd9-44d8-ab6a-e5e93843ba75\") " pod="openshift-image-registry/node-ca-lmtrv" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.318355 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/856b0311-fbd9-44d8-ab6a-e5e93843ba75-serviceca\") pod \"node-ca-lmtrv\" (UID: \"856b0311-fbd9-44d8-ab6a-e5e93843ba75\") " pod="openshift-image-registry/node-ca-lmtrv" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.329093 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.346244 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:33 crc 
kubenswrapper[4810]: I0110 06:46:33.346305 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.346323 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.346353 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.346373 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:33Z","lastTransitionTime":"2026-01-10T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.366639 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxxlt\" (UniqueName: \"kubernetes.io/projected/856b0311-fbd9-44d8-ab6a-e5e93843ba75-kube-api-access-zxxlt\") pod \"node-ca-lmtrv\" (UID: \"856b0311-fbd9-44d8-ab6a-e5e93843ba75\") " pod="openshift-image-registry/node-ca-lmtrv" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.394949 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.449243 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.449289 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.449307 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.449333 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.449352 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:33Z","lastTransitionTime":"2026-01-10T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.449313 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.474371 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.511550 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.519032 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.519232 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:33 crc kubenswrapper[4810]: E0110 06:46:33.519337 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:46:33 crc kubenswrapper[4810]: E0110 06:46:33.519340 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:46:37.519298315 +0000 UTC m=+26.134791238 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:46:33 crc kubenswrapper[4810]: E0110 06:46:33.519463 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:37.519430288 +0000 UTC m=+26.134923201 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.552878 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.552926 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.552938 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.552958 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.552970 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:33Z","lastTransitionTime":"2026-01-10T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.554865 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.591097 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.620877 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:33 crc kubenswrapper[4810]: E0110 06:46:33.621153 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:46:33 crc kubenswrapper[4810]: E0110 06:46:33.621229 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:46:33 crc kubenswrapper[4810]: E0110 06:46:33.621250 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.621159 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:33 crc kubenswrapper[4810]: E0110 06:46:33.621328 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:37.621298702 +0000 UTC m=+26.236791615 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:33 crc kubenswrapper[4810]: E0110 06:46:33.621352 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:46:33 crc kubenswrapper[4810]: E0110 06:46:33.621383 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:46:33 crc kubenswrapper[4810]: E0110 06:46:33.621402 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for 
pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.621452 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:33 crc kubenswrapper[4810]: E0110 06:46:33.621485 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:37.621450665 +0000 UTC m=+26.236943578 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:33 crc kubenswrapper[4810]: E0110 06:46:33.621603 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:46:33 crc kubenswrapper[4810]: E0110 06:46:33.621676 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-10 06:46:37.62165517 +0000 UTC m=+26.237148163 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.634074 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.655880 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.655936 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.655953 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.655980 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.655997 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:33Z","lastTransitionTime":"2026-01-10T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.663820 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lmtrv" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.680487 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: W0110 06:46:33.681420 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod856b0311_fbd9_44d8_ab6a_e5e93843ba75.slice/crio-e8161c0f29828649e06310f0923fe2301bbcf3e852c59056f6e206c0accd64d2 WatchSource:0}: Error finding container e8161c0f29828649e06310f0923fe2301bbcf3e852c59056f6e206c0accd64d2: Status 404 returned error can't find the container with id e8161c0f29828649e06310f0923fe2301bbcf3e852c59056f6e206c0accd64d2 Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.715279 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.753899 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.760493 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.760769 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:33 crc 
kubenswrapper[4810]: I0110 06:46:33.761004 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.761255 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.761468 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:33Z","lastTransitionTime":"2026-01-10T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.804857 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.833302 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.847535 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lmtrv" event={"ID":"856b0311-fbd9-44d8-ab6a-e5e93843ba75","Type":"ContainerStarted","Data":"e8161c0f29828649e06310f0923fe2301bbcf3e852c59056f6e206c0accd64d2"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.851743 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerStarted","Data":"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.851802 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" 
event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerStarted","Data":"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.854461 4810 generic.go:334] "Generic (PLEG): container finished" podID="73649741-6005-4ee3-8b33-7b703540835e" containerID="2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673" exitCode=0 Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.854509 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" event={"ID":"73649741-6005-4ee3-8b33-7b703540835e","Type":"ContainerDied","Data":"2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.864829 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.865086 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.865099 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.865154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.865170 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:33Z","lastTransitionTime":"2026-01-10T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.877183 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.960963 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitiali
zing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.966850 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.966896 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.966909 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.966929 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.966942 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:33Z","lastTransitionTime":"2026-01-10T06:46:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.973509 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:33 crc kubenswrapper[4810]: I0110 06:46:33.988248 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.029112 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.070274 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.070868 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.070908 4810 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.070921 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.070945 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.070957 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:34Z","lastTransitionTime":"2026-01-10T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.115073 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.150812 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.173790 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.173846 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.173868 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.173892 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.173909 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:34Z","lastTransitionTime":"2026-01-10T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.189235 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.229121 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.267401 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.276159 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.276215 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.276229 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.276244 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.276253 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:34Z","lastTransitionTime":"2026-01-10T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.312386 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.352154 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.378987 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.379056 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.379085 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.379115 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.379137 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:34Z","lastTransitionTime":"2026-01-10T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.393782 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.429607 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.472407 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.482432 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:34 crc 
kubenswrapper[4810]: I0110 06:46:34.482483 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.482500 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.482526 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.482545 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:34Z","lastTransitionTime":"2026-01-10T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.512866 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.578727 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.586582 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.586641 4810 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.586658 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.586683 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.586700 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:34Z","lastTransitionTime":"2026-01-10T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.598308 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.644736 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.674522 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.689833 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.689887 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.689905 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.689927 4810 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.689978 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:34Z","lastTransitionTime":"2026-01-10T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.692441 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.692497 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:34 crc kubenswrapper[4810]: E0110 06:46:34.692621 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.692642 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:34 crc kubenswrapper[4810]: E0110 06:46:34.692805 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:46:34 crc kubenswrapper[4810]: E0110 06:46:34.692957 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.718474 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.753353 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.790227 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.793648 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.793711 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.793730 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.793753 4810 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.793773 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:34Z","lastTransitionTime":"2026-01-10T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.846557 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a
67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.860578 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lmtrv" event={"ID":"856b0311-fbd9-44d8-ab6a-e5e93843ba75","Type":"ContainerStarted","Data":"3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.866317 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerStarted","Data":"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.866365 4810 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerStarted","Data":"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.866379 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerStarted","Data":"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.866390 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerStarted","Data":"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.869041 4810 generic.go:334] "Generic (PLEG): container finished" podID="73649741-6005-4ee3-8b33-7b703540835e" containerID="93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc" exitCode=0 Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.869078 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" event={"ID":"73649741-6005-4ee3-8b33-7b703540835e","Type":"ContainerDied","Data":"93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.872100 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.896431 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.896495 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.896519 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.896551 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.896574 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:34Z","lastTransitionTime":"2026-01-10T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.915233 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.950912 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.993312 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.999496 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.999546 4810 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.999559 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.999577 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:34 crc kubenswrapper[4810]: I0110 06:46:34.999589 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:34Z","lastTransitionTime":"2026-01-10T06:46:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.032021 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc 
kubenswrapper[4810]: I0110 06:46:35.078314 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.101994 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.102016 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.102023 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.102037 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.102045 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:35Z","lastTransitionTime":"2026-01-10T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.108706 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.149471 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.189234 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.205346 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.205391 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.205404 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.205421 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.205435 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:35Z","lastTransitionTime":"2026-01-10T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.229536 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.287228 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"contai
nerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.308143 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.308239 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.308260 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.308389 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.308425 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:35Z","lastTransitionTime":"2026-01-10T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.308879 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.351720 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.389424 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.411806 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.411898 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.411943 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.411978 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.412001 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:35Z","lastTransitionTime":"2026-01-10T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.431446 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.474499 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.514773 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.514811 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.514823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.514845 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.514858 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:35Z","lastTransitionTime":"2026-01-10T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.617988 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.618043 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.618063 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.618092 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.618113 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:35Z","lastTransitionTime":"2026-01-10T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.721671 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.721725 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.721742 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.721765 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.721782 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:35Z","lastTransitionTime":"2026-01-10T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.824610 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.824689 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.824706 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.824729 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.824745 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:35Z","lastTransitionTime":"2026-01-10T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.883267 4810 generic.go:334] "Generic (PLEG): container finished" podID="73649741-6005-4ee3-8b33-7b703540835e" containerID="d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96" exitCode=0 Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.883375 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" event={"ID":"73649741-6005-4ee3-8b33-7b703540835e","Type":"ContainerDied","Data":"d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96"} Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.916004 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.928341 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.928394 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.928415 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.928443 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.928465 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:35Z","lastTransitionTime":"2026-01-10T06:46:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.938433 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resou
rce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.954879 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.965863 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.976294 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.989825 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:35 crc kubenswrapper[4810]: I0110 06:46:35.999891 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.013828 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.025302 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.031780 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.031817 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.031829 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.031845 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.031857 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:36Z","lastTransitionTime":"2026-01-10T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.034976 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.045107 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.055469 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.070439 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.083279 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.110106 4810 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.134967 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.135025 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.135045 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.135069 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.135086 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:36Z","lastTransitionTime":"2026-01-10T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.238401 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.238452 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.238468 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.238491 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.238511 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:36Z","lastTransitionTime":"2026-01-10T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.341911 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.341955 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.341972 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.341994 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.342010 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:36Z","lastTransitionTime":"2026-01-10T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.445171 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.445287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.445338 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.445370 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.445390 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:36Z","lastTransitionTime":"2026-01-10T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.548597 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.548645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.548658 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.548676 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.548687 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:36Z","lastTransitionTime":"2026-01-10T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.651586 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.651645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.651662 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.651686 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.651702 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:36Z","lastTransitionTime":"2026-01-10T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.692014 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.692106 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.692112 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:36 crc kubenswrapper[4810]: E0110 06:46:36.692283 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:46:36 crc kubenswrapper[4810]: E0110 06:46:36.692423 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:46:36 crc kubenswrapper[4810]: E0110 06:46:36.692538 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.754700 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.754767 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.754791 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.754823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.754844 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:36Z","lastTransitionTime":"2026-01-10T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.858053 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.858110 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.858128 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.858150 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.858166 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:36Z","lastTransitionTime":"2026-01-10T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.893177 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerStarted","Data":"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c"} Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.897409 4810 generic.go:334] "Generic (PLEG): container finished" podID="73649741-6005-4ee3-8b33-7b703540835e" containerID="c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e" exitCode=0 Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.897462 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" event={"ID":"73649741-6005-4ee3-8b33-7b703540835e","Type":"ContainerDied","Data":"c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e"} Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.912316 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.923911 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.958537 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224
ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4
a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.962305 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.962361 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.962381 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.962407 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.962425 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:36Z","lastTransitionTime":"2026-01-10T06:46:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.978027 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c
b8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:36 crc kubenswrapper[4810]: I0110 06:46:36.988801 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.002392 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.016479 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.032448 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.043003 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.052841 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.109781 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.109820 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.109828 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.109842 4810 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.109851 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:37Z","lastTransitionTime":"2026-01-10T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.110699 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.121518 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.145550 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.159738 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.171257 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.211567 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.211638 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.211661 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.211705 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.211726 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:37Z","lastTransitionTime":"2026-01-10T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.314957 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.315014 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.315032 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.315058 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.315076 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:37Z","lastTransitionTime":"2026-01-10T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.417643 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.417687 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.417703 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.417730 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.417748 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:37Z","lastTransitionTime":"2026-01-10T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.521037 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.521115 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.521140 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.521188 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.521245 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:37Z","lastTransitionTime":"2026-01-10T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.563969 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.564164 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:37 crc kubenswrapper[4810]: E0110 06:46:37.564341 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:46:37 crc kubenswrapper[4810]: E0110 06:46:37.564447 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:45.564420963 +0000 UTC m=+34.179913886 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:46:37 crc kubenswrapper[4810]: E0110 06:46:37.564705 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:46:45.564667348 +0000 UTC m=+34.180160261 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.623943 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.624024 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.624053 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.624084 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.624105 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:37Z","lastTransitionTime":"2026-01-10T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.664908 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.664975 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.665009 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:37 crc kubenswrapper[4810]: E0110 06:46:37.665188 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:46:37 crc kubenswrapper[4810]: E0110 06:46:37.665258 
4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:46:37 crc kubenswrapper[4810]: E0110 06:46:37.665275 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:37 crc kubenswrapper[4810]: E0110 06:46:37.665333 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:45.665313034 +0000 UTC m=+34.280805957 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:37 crc kubenswrapper[4810]: E0110 06:46:37.665538 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:46:37 crc kubenswrapper[4810]: E0110 06:46:37.665577 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:46:37 crc kubenswrapper[4810]: E0110 06:46:37.665592 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:37 crc kubenswrapper[4810]: E0110 06:46:37.665602 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:46:37 crc kubenswrapper[4810]: E0110 06:46:37.665676 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:45.665638962 +0000 UTC m=+34.281131935 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:37 crc kubenswrapper[4810]: E0110 06:46:37.665722 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:46:45.665694574 +0000 UTC m=+34.281187497 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.728145 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.728231 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.728249 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.728269 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.728287 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:37Z","lastTransitionTime":"2026-01-10T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.831791 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.831884 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.831913 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.831944 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.831967 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:37Z","lastTransitionTime":"2026-01-10T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.911994 4810 generic.go:334] "Generic (PLEG): container finished" podID="73649741-6005-4ee3-8b33-7b703540835e" containerID="845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549" exitCode=0 Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.912069 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" event={"ID":"73649741-6005-4ee3-8b33-7b703540835e","Type":"ContainerDied","Data":"845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549"} Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.930790 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.936642 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.936698 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.936717 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.936741 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.936759 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:37Z","lastTransitionTime":"2026-01-10T06:46:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.949472 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.968619 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:37 crc kubenswrapper[4810]: I0110 06:46:37.986414 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.003054 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.019426 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.032589 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.038981 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.039038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.039095 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.039135 4810 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.039164 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:38Z","lastTransitionTime":"2026-01-10T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.043671 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.065267 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.083692 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.100422 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.111538 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.120354 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.129672 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.140614 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.142169 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.142250 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.142271 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.142295 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.142313 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:38Z","lastTransitionTime":"2026-01-10T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.244572 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.244623 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.244640 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.244663 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.244677 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:38Z","lastTransitionTime":"2026-01-10T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.349184 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.349239 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.349250 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.349265 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.349277 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:38Z","lastTransitionTime":"2026-01-10T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.452687 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.452743 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.452759 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.452780 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.452797 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:38Z","lastTransitionTime":"2026-01-10T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.556074 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.556138 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.556156 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.556181 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.556232 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:38Z","lastTransitionTime":"2026-01-10T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.591725 4810 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.660457 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.660803 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.660823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.660848 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.660866 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:38Z","lastTransitionTime":"2026-01-10T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.691986 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.692022 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.692087 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:38 crc kubenswrapper[4810]: E0110 06:46:38.692312 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:46:38 crc kubenswrapper[4810]: E0110 06:46:38.692443 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:46:38 crc kubenswrapper[4810]: E0110 06:46:38.692653 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.763829 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.763877 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.763894 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.763918 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.763976 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:38Z","lastTransitionTime":"2026-01-10T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.866968 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.867015 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.867033 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.867056 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.867073 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:38Z","lastTransitionTime":"2026-01-10T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.918468 4810 generic.go:334] "Generic (PLEG): container finished" podID="73649741-6005-4ee3-8b33-7b703540835e" containerID="d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332" exitCode=0 Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.918541 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" event={"ID":"73649741-6005-4ee3-8b33-7b703540835e","Type":"ContainerDied","Data":"d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332"} Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.929476 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.930604 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerStarted","Data":"6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea"} Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.931483 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.931554 4810 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.943690 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.957128 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.961288 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.965069 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.968014 4810 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.970645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.970677 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.970685 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.970700 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.970709 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:38Z","lastTransitionTime":"2026-01-10T06:46:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.980495 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:38 crc kubenswrapper[4810]: I0110 06:46:38.991332 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.000686 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.010108 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.023806 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.042966 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.064498 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.073338 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.073390 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.073405 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.073422 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.073434 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:39Z","lastTransitionTime":"2026-01-10T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.093859 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.113806 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a
50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.128016 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.140248 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.155764 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.165614 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.173908 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.182559 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.182604 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.182615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.182632 4810 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.182644 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:39Z","lastTransitionTime":"2026-01-10T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.194225 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.203964 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.229558 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.250323 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d
13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.268039 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.282287 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.285839 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.286010 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.286126 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.286264 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.286351 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:39Z","lastTransitionTime":"2026-01-10T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.293671 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.304946 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.319660 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.332790 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.342675 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.351185 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.389436 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.389491 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.389512 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.389536 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.389557 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:39Z","lastTransitionTime":"2026-01-10T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.493351 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.493404 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.493422 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.493449 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.493468 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:39Z","lastTransitionTime":"2026-01-10T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.596177 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.596282 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.596298 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.596322 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.596338 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:39Z","lastTransitionTime":"2026-01-10T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.699136 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.699779 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.700038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.700336 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.700695 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:39Z","lastTransitionTime":"2026-01-10T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.804807 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.804873 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.804890 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.804914 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.804935 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:39Z","lastTransitionTime":"2026-01-10T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.907902 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.907951 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.907969 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.907995 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.908014 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:39Z","lastTransitionTime":"2026-01-10T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.970979 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.971037 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.971053 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.971081 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.971100 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:39Z","lastTransitionTime":"2026-01-10T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:39 crc kubenswrapper[4810]: E0110 06:46:39.985543 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.989845 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.989884 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.989898 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.989915 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:39 crc kubenswrapper[4810]: I0110 06:46:39.989928 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:39Z","lastTransitionTime":"2026-01-10T06:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:40 crc kubenswrapper[4810]: E0110 06:46:40.001070 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.005867 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.005919 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.005937 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.005963 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.005979 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:40Z","lastTransitionTime":"2026-01-10T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:40 crc kubenswrapper[4810]: E0110 06:46:40.021572 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.026102 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.026151 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.026172 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.026235 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.026290 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:40Z","lastTransitionTime":"2026-01-10T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.040777 4810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 10 06:46:40 crc kubenswrapper[4810]: E0110 06:46:40.042808 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.056330 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.056690 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.057139 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.057494 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.057809 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:40Z","lastTransitionTime":"2026-01-10T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:40 crc kubenswrapper[4810]: E0110 06:46:40.070293 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:40 crc kubenswrapper[4810]: E0110 06:46:40.070621 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.074984 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.075047 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.075068 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.075093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.075112 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:40Z","lastTransitionTime":"2026-01-10T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.178458 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.178833 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.178846 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.178865 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.178880 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:40Z","lastTransitionTime":"2026-01-10T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.282906 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.282961 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.282975 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.282995 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.283012 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:40Z","lastTransitionTime":"2026-01-10T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.386251 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.386327 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.386348 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.386377 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.386395 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:40Z","lastTransitionTime":"2026-01-10T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.488969 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.489115 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.489250 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.489382 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.489486 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:40Z","lastTransitionTime":"2026-01-10T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.592038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.592105 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.592118 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.592136 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.592148 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:40Z","lastTransitionTime":"2026-01-10T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.692446 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:40 crc kubenswrapper[4810]: E0110 06:46:40.692629 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.692454 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:40 crc kubenswrapper[4810]: E0110 06:46:40.692894 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.693045 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:40 crc kubenswrapper[4810]: E0110 06:46:40.693294 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.696998 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.697320 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.697439 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.698226 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.698352 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:40Z","lastTransitionTime":"2026-01-10T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.801639 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.801683 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.801700 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.801722 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.801741 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:40Z","lastTransitionTime":"2026-01-10T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.914439 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.914489 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.914514 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.914541 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:40 crc kubenswrapper[4810]: I0110 06:46:40.914563 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:40Z","lastTransitionTime":"2026-01-10T06:46:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.017621 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.017663 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.017675 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.017692 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.017703 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:41Z","lastTransitionTime":"2026-01-10T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.050112 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" event={"ID":"73649741-6005-4ee3-8b33-7b703540835e","Type":"ContainerStarted","Data":"21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb"} Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.050236 4810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.078164 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"init
ContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.094007 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.107721 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.118942 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.120990 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.121064 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.121088 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.121118 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.121144 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:41Z","lastTransitionTime":"2026-01-10T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.130806 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.142118 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.159264 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.173509 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.189088 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.203858 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.218367 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.224050 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.224109 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.224127 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.224153 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.224173 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:41Z","lastTransitionTime":"2026-01-10T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.232902 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.245085 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.256707 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.273639 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1
c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.327074 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.327375 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.327423 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.327450 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.327470 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:41Z","lastTransitionTime":"2026-01-10T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.430084 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.430138 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.430156 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.430179 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.430225 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:41Z","lastTransitionTime":"2026-01-10T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.533392 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.533446 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.533463 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.533483 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.533499 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:41Z","lastTransitionTime":"2026-01-10T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.636242 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.636303 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.636321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.636346 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.636369 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:41Z","lastTransitionTime":"2026-01-10T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.709810 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.725400 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.739479 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.739535 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.739552 4810 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.739575 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.739598 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:41Z","lastTransitionTime":"2026-01-10T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.741854 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.757592 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.773022 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.790306 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.803992 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.820473 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.832216 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1
c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.841555 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.841622 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.841635 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.841667 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.841680 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:41Z","lastTransitionTime":"2026-01-10T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.858328 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.882933 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d
13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.899869 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.912329 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.923625 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.934819 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.944003 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.944074 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.944094 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.944126 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:41 crc kubenswrapper[4810]: I0110 06:46:41.944146 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:41Z","lastTransitionTime":"2026-01-10T06:46:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.048336 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.049359 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.049431 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.049466 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.049487 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:42Z","lastTransitionTime":"2026-01-10T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.151753 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.151816 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.152280 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.152327 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.152349 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:42Z","lastTransitionTime":"2026-01-10T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.255778 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.255837 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.255856 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.255881 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.255900 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:42Z","lastTransitionTime":"2026-01-10T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.359279 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.359344 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.359363 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.359389 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.359406 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:42Z","lastTransitionTime":"2026-01-10T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.461905 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.461946 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.461957 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.461992 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.462006 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:42Z","lastTransitionTime":"2026-01-10T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.565340 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.565394 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.565412 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.565440 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.565458 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:42Z","lastTransitionTime":"2026-01-10T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.668749 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.668824 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.668848 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.668877 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.668899 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:42Z","lastTransitionTime":"2026-01-10T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.692669 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.692701 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.692926 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:42 crc kubenswrapper[4810]: E0110 06:46:42.693101 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:46:42 crc kubenswrapper[4810]: E0110 06:46:42.693263 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:46:42 crc kubenswrapper[4810]: E0110 06:46:42.693392 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.772392 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.772459 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.772481 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.772504 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.772521 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:42Z","lastTransitionTime":"2026-01-10T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.875722 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.875782 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.875804 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.875832 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.875851 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:42Z","lastTransitionTime":"2026-01-10T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.978034 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.978085 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.978103 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.978129 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:42 crc kubenswrapper[4810]: I0110 06:46:42.978145 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:42Z","lastTransitionTime":"2026-01-10T06:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.080305 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.080343 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.080352 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.080365 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.080375 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:43Z","lastTransitionTime":"2026-01-10T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.183693 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.183755 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.183773 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.183796 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.183813 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:43Z","lastTransitionTime":"2026-01-10T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.280692 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj"] Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.281628 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.287858 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.287923 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.287946 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.287978 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.288002 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:43Z","lastTransitionTime":"2026-01-10T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.290385 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.292596 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.302637 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.319797 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.335888 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.351064 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.366853 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.379181 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.391502 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.391565 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.391586 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.391616 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.391635 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:43Z","lastTransitionTime":"2026-01-10T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.393572 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.404275 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.414352 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.425724 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.437312 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d25e174-61f6-4b97-8e20-dcb9d255f116-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-7zmfj\" (UID: \"7d25e174-61f6-4b97-8e20-dcb9d255f116\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.437398 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7p6n\" (UniqueName: \"kubernetes.io/projected/7d25e174-61f6-4b97-8e20-dcb9d255f116-kube-api-access-f7p6n\") pod \"ovnkube-control-plane-749d76644c-7zmfj\" (UID: \"7d25e174-61f6-4b97-8e20-dcb9d255f116\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.437516 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d25e174-61f6-4b97-8e20-dcb9d255f116-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7zmfj\" (UID: \"7d25e174-61f6-4b97-8e20-dcb9d255f116\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.437554 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d25e174-61f6-4b97-8e20-dcb9d255f116-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7zmfj\" (UID: \"7d25e174-61f6-4b97-8e20-dcb9d255f116\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.440487 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1
c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.463763 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.491870 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d
13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.498509 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.498591 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.498616 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.498648 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.498672 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:43Z","lastTransitionTime":"2026-01-10T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.514538 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c
b8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.533523 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.538540 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d25e174-61f6-4b97-8e20-dcb9d255f116-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7zmfj\" (UID: \"7d25e174-61f6-4b97-8e20-dcb9d255f116\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.538629 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f7p6n\" (UniqueName: \"kubernetes.io/projected/7d25e174-61f6-4b97-8e20-dcb9d255f116-kube-api-access-f7p6n\") pod \"ovnkube-control-plane-749d76644c-7zmfj\" (UID: \"7d25e174-61f6-4b97-8e20-dcb9d255f116\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.538721 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d25e174-61f6-4b97-8e20-dcb9d255f116-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7zmfj\" (UID: \"7d25e174-61f6-4b97-8e20-dcb9d255f116\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.538997 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d25e174-61f6-4b97-8e20-dcb9d255f116-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7zmfj\" (UID: \"7d25e174-61f6-4b97-8e20-dcb9d255f116\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.539850 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7d25e174-61f6-4b97-8e20-dcb9d255f116-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7zmfj\" (UID: \"7d25e174-61f6-4b97-8e20-dcb9d255f116\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.540097 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7d25e174-61f6-4b97-8e20-dcb9d255f116-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7zmfj\" (UID: \"7d25e174-61f6-4b97-8e20-dcb9d255f116\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.547043 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.549924 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7d25e174-61f6-4b97-8e20-dcb9d255f116-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7zmfj\" (UID: \"7d25e174-61f6-4b97-8e20-dcb9d255f116\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.567403 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7p6n\" (UniqueName: \"kubernetes.io/projected/7d25e174-61f6-4b97-8e20-dcb9d255f116-kube-api-access-f7p6n\") pod \"ovnkube-control-plane-749d76644c-7zmfj\" (UID: \"7d25e174-61f6-4b97-8e20-dcb9d255f116\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.602328 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.602388 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.602405 4810 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.602429 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.602447 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:43Z","lastTransitionTime":"2026-01-10T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.603548 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" Jan 10 06:46:43 crc kubenswrapper[4810]: W0110 06:46:43.622567 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d25e174_61f6_4b97_8e20_dcb9d255f116.slice/crio-ed1c646be106699b595fca5db496f6bf0f93c4d9f56f2d38a425b1f95f37ca64 WatchSource:0}: Error finding container ed1c646be106699b595fca5db496f6bf0f93c4d9f56f2d38a425b1f95f37ca64: Status 404 returned error can't find the container with id ed1c646be106699b595fca5db496f6bf0f93c4d9f56f2d38a425b1f95f37ca64 Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.705517 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.705556 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.705569 4810 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.705585 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.705597 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:43Z","lastTransitionTime":"2026-01-10T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.808125 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.808164 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.808174 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.808189 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.808216 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:43Z","lastTransitionTime":"2026-01-10T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.910805 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.910833 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.910842 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.910854 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:43 crc kubenswrapper[4810]: I0110 06:46:43.910863 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:43Z","lastTransitionTime":"2026-01-10T06:46:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.014761 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.014832 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.014855 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.014886 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.014910 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:44Z","lastTransitionTime":"2026-01-10T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.060684 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" event={"ID":"7d25e174-61f6-4b97-8e20-dcb9d255f116","Type":"ContainerStarted","Data":"ed1c646be106699b595fca5db496f6bf0f93c4d9f56f2d38a425b1f95f37ca64"} Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.117625 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.117677 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.117688 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.117715 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.117731 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:44Z","lastTransitionTime":"2026-01-10T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.220028 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.220083 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.220099 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.220126 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.220143 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:44Z","lastTransitionTime":"2026-01-10T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.322147 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.322468 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.322480 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.322498 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.322510 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:44Z","lastTransitionTime":"2026-01-10T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.377448 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-9nv84"] Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.377902 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:44 crc kubenswrapper[4810]: E0110 06:46:44.377966 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.393790 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.405235 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.415712 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.426077 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.426130 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.426148 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.426170 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.426080 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.426188 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:44Z","lastTransitionTime":"2026-01-10T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.434268 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.442531 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.450146 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg7n6\" (UniqueName: \"kubernetes.io/projected/6741fd18-31c0-4bc3-be74-c0f6080c67af-kube-api-access-rg7n6\") pod \"network-metrics-daemon-9nv84\" (UID: \"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.450384 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs\") pod \"network-metrics-daemon-9nv84\" (UID: \"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.456435 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.468457 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.479815 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.494980 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.506690 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.521261 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"na
me\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3b
dbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.529819 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.529871 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.529884 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.529901 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.529913 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:44Z","lastTransitionTime":"2026-01-10T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.550013 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.551518 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs\") pod \"network-metrics-daemon-9nv84\" (UID: \"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.551605 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg7n6\" (UniqueName: \"kubernetes.io/projected/6741fd18-31c0-4bc3-be74-c0f6080c67af-kube-api-access-rg7n6\") pod \"network-metrics-daemon-9nv84\" (UID: \"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:44 crc kubenswrapper[4810]: E0110 06:46:44.551787 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:46:44 crc kubenswrapper[4810]: E0110 06:46:44.551891 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs podName:6741fd18-31c0-4bc3-be74-c0f6080c67af nodeName:}" failed. No retries permitted until 2026-01-10 06:46:45.051869612 +0000 UTC m=+33.667362505 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs") pod "network-metrics-daemon-9nv84" (UID: "6741fd18-31c0-4bc3-be74-c0f6080c67af") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.564590 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.573817 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg7n6\" (UniqueName: \"kubernetes.io/projected/6741fd18-31c0-4bc3-be74-c0f6080c67af-kube-api-access-rg7n6\") pod \"network-metrics-daemon-9nv84\" (UID: \"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.575254 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.588064 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f0
6628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.604419 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.737814 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:44 crc kubenswrapper[4810]: E0110 06:46:44.737988 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.737829 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:44 crc kubenswrapper[4810]: E0110 06:46:44.738109 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.738143 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:44 crc kubenswrapper[4810]: E0110 06:46:44.738372 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.742230 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.742322 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.742345 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.742368 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.742426 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:44Z","lastTransitionTime":"2026-01-10T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.846100 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.846149 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.846180 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.846239 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.846253 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:44Z","lastTransitionTime":"2026-01-10T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.949189 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.949234 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.949245 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.949257 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:44 crc kubenswrapper[4810]: I0110 06:46:44.949266 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:44Z","lastTransitionTime":"2026-01-10T06:46:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.051825 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.052059 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.052079 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.052108 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.052133 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:45Z","lastTransitionTime":"2026-01-10T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.065371 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.067186 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-w95z8" event={"ID":"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073","Type":"ContainerStarted","Data":"9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.070020 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.070047 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.072266 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" event={"ID":"7d25e174-61f6-4b97-8e20-dcb9d255f116","Type":"ContainerStarted","Data":"9b1807f150027b0221a0ca4ea41a955d15b8dfd0c9128f7c7ac5ebd7c456fb1f"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.072342 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" 
event={"ID":"7d25e174-61f6-4b97-8e20-dcb9d255f116","Type":"ContainerStarted","Data":"c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.099387 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.115301 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.132679 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.143507 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs\") pod \"network-metrics-daemon-9nv84\" (UID: 
\"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.143838 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.144799 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs podName:6741fd18-31c0-4bc3-be74-c0f6080c67af nodeName:}" failed. No retries permitted until 2026-01-10 06:46:46.144774358 +0000 UTC m=+34.760267281 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs") pod "network-metrics-daemon-9nv84" (UID: "6741fd18-31c0-4bc3-be74-c0f6080c67af") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.149215 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.154982 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.155025 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.155041 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:45 crc 
kubenswrapper[4810]: I0110 06:46:45.155063 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.155080 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:45Z","lastTransitionTime":"2026-01-10T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.168936 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.185096 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f
2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e06
6e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.207634 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.223288 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.239538 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.253102 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.257165 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.257219 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.257228 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.257242 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.257254 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:45Z","lastTransitionTime":"2026-01-10T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.262797 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.287528 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.310857 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.329048 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.359602 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.359657 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.359675 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.359702 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.359725 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:45Z","lastTransitionTime":"2026-01-10T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.375600 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.393290 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.409868 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc 
kubenswrapper[4810]: I0110 06:46:45.423115 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.438869 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.451528 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.461771 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.461803 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.461812 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:45 crc 
kubenswrapper[4810]: I0110 06:46:45.461825 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.461834 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:45Z","lastTransitionTime":"2026-01-10T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.465097 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.484103 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f
2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e06
6e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.503975 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.526333 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.541744 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.559220 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.563445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.563476 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.563485 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.563498 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.563516 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:45Z","lastTransitionTime":"2026-01-10T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.569670 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.579880 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.594705 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.605906 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.619895 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.635132 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.643646 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc 
kubenswrapper[4810]: I0110 06:46:45.651400 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.651508 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.651585 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:47:01.651559709 +0000 UTC m=+50.267052592 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.651589 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.651640 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:47:01.65163397 +0000 UTC m=+50.267126853 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.655563 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff86857
5a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:45Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.665532 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.665589 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.665599 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:45 crc 
kubenswrapper[4810]: I0110 06:46:45.665635 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.665645 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:45Z","lastTransitionTime":"2026-01-10T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.692838 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.693012 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.752769 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.752820 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.752839 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.752898 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.752925 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.752938 4810 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.752947 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.752963 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:47:01.752946022 +0000 UTC m=+50.368438915 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.752981 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-10 06:47:01.752973633 +0000 UTC m=+50.368466526 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.752997 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.753007 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.753013 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:45 crc kubenswrapper[4810]: E0110 06:46:45.753035 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-10 06:47:01.753025884 +0000 UTC m=+50.368518767 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.767382 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.767421 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.767432 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.767450 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.767462 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:45Z","lastTransitionTime":"2026-01-10T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.870171 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.870228 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.870246 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.870267 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.870282 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:45Z","lastTransitionTime":"2026-01-10T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.972823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.972863 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.972876 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.972893 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:45 crc kubenswrapper[4810]: I0110 06:46:45.972903 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:45Z","lastTransitionTime":"2026-01-10T06:46:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.074604 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.074637 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.074645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.074658 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.074668 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:46Z","lastTransitionTime":"2026-01-10T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.156653 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs\") pod \"network-metrics-daemon-9nv84\" (UID: \"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:46 crc kubenswrapper[4810]: E0110 06:46:46.156852 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:46:46 crc kubenswrapper[4810]: E0110 06:46:46.156965 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs podName:6741fd18-31c0-4bc3-be74-c0f6080c67af nodeName:}" failed. No retries permitted until 2026-01-10 06:46:48.156932787 +0000 UTC m=+36.772425720 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs") pod "network-metrics-daemon-9nv84" (UID: "6741fd18-31c0-4bc3-be74-c0f6080c67af") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.177375 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.177431 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.177449 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.177469 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.177485 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:46Z","lastTransitionTime":"2026-01-10T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.280623 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.280685 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.280724 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.280751 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.280770 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:46Z","lastTransitionTime":"2026-01-10T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.383568 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.384098 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.384123 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.384157 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.384183 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:46Z","lastTransitionTime":"2026-01-10T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.487739 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.487782 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.487793 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.487810 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.487821 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:46Z","lastTransitionTime":"2026-01-10T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.590110 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.590172 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.590190 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.590241 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.590262 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:46Z","lastTransitionTime":"2026-01-10T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.691931 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.691979 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.691977 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.692070 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:46 crc kubenswrapper[4810]: E0110 06:46:46.692095 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.692127 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.692149 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.692177 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.692243 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:46Z","lastTransitionTime":"2026-01-10T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:46 crc kubenswrapper[4810]: E0110 06:46:46.692307 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:46:46 crc kubenswrapper[4810]: E0110 06:46:46.692413 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.795336 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.795399 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.795415 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.795442 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.795459 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:46Z","lastTransitionTime":"2026-01-10T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.898654 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.898722 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.898740 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.898765 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:46 crc kubenswrapper[4810]: I0110 06:46:46.898782 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:46Z","lastTransitionTime":"2026-01-10T06:46:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.001372 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.001427 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.001445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.001467 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.001484 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:47Z","lastTransitionTime":"2026-01-10T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.081633 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/0.log" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.087727 4810 generic.go:334] "Generic (PLEG): container finished" podID="dce51084-e094-437c-a988-66b17982fd5d" containerID="6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea" exitCode=1 Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.087786 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea"} Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.088935 4810 scope.go:117] "RemoveContainer" containerID="6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.105142 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.105269 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.105298 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.105356 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.105383 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:47Z","lastTransitionTime":"2026-01-10T06:46:47Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.110017 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/
serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.144583 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.164710 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.185004 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.201846 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.208509 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.208576 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.208598 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.208630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.208657 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:47Z","lastTransitionTime":"2026-01-10T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.217103 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8dfd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.234295 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc 
kubenswrapper[4810]: I0110 06:46:47.285212 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.307690 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.312623 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.312685 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.312762 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.312835 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.312864 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:47Z","lastTransitionTime":"2026-01-10T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.328213 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.348404 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.369137 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.393844 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588e
df7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:
46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.415234 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.415268 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.415278 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.415292 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.415301 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:47Z","lastTransitionTime":"2026-01-10T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.424759 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:46:46Z\\\",\\\"message\\\":\\\"ce (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0110 06:46:46.014069 5976 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0110 06:46:46.014236 5976 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0110 06:46:46.014239 5976 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0110 06:46:46.014725 5976 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0110 06:46:46.014841 5976 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0110 06:46:46.014864 5976 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0110 06:46:46.014869 5976 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0110 06:46:46.014898 5976 factory.go:656] Stopping watch factory\\\\nI0110 06:46:46.014897 5976 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0110 06:46:46.014913 5976 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0110 06:46:46.014987 5976 ovnkube.go:599] Stopped ovnkube\\\\nI0110 06:46:46.014949 5976 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851
b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.442028 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.455530 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.467476 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.518223 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.518536 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.518549 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:47 crc 
kubenswrapper[4810]: I0110 06:46:47.518566 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.518578 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:47Z","lastTransitionTime":"2026-01-10T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.621445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.621499 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.621519 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.621544 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.621559 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:47Z","lastTransitionTime":"2026-01-10T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.692267 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:47 crc kubenswrapper[4810]: E0110 06:46:47.692488 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.724374 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.724413 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.724422 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.724435 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.724444 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:47Z","lastTransitionTime":"2026-01-10T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.826986 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.827018 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.827026 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.827040 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.827047 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:47Z","lastTransitionTime":"2026-01-10T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.929573 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.929624 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.929635 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.929654 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:47 crc kubenswrapper[4810]: I0110 06:46:47.929667 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:47Z","lastTransitionTime":"2026-01-10T06:46:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.032964 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.033044 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.033070 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.033104 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.033127 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:48Z","lastTransitionTime":"2026-01-10T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.093125 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/0.log" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.095886 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerStarted","Data":"e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02"} Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.096258 4810 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.109917 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 
2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.125733 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b3
29c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac
117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.135292 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.135340 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.135352 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.135371 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.135383 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:48Z","lastTransitionTime":"2026-01-10T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.137439 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.152665 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.165067 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.177154 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.177799 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs\") pod \"network-metrics-daemon-9nv84\" (UID: \"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:48 crc kubenswrapper[4810]: E0110 06:46:48.177983 
4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:46:48 crc kubenswrapper[4810]: E0110 06:46:48.178068 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs podName:6741fd18-31c0-4bc3-be74-c0f6080c67af nodeName:}" failed. No retries permitted until 2026-01-10 06:46:52.178042394 +0000 UTC m=+40.793535307 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs") pod "network-metrics-daemon-9nv84" (UID: "6741fd18-31c0-4bc3-be74-c0f6080c67af") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.194981 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1
c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.216102 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:46:46Z\\\",\\\"message\\\":\\\"ce (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0110 06:46:46.014069 5976 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0110 06:46:46.014236 5976 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0110 06:46:46.014239 5976 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0110 06:46:46.014725 5976 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0110 06:46:46.014841 5976 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0110 06:46:46.014864 5976 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0110 06:46:46.014869 5976 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0110 06:46:46.014898 5976 factory.go:656] Stopping watch factory\\\\nI0110 06:46:46.014897 5976 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0110 06:46:46.014913 5976 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0110 06:46:46.014987 5976 ovnkube.go:599] Stopped ovnkube\\\\nI0110 06:46:46.014949 5976 handler.go:208] Removed *v1.Pod event handler 
3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"n
ame\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.227726 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.237791 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.237823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.237831 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.237843 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.237851 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:48Z","lastTransitionTime":"2026-01-10T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.244540 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.254492 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.266138 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.299038 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.321473 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.340566 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.340639 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.340665 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.340694 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.340711 4810 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:48Z","lastTransitionTime":"2026-01-10T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.346280 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.362540 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.378842 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.443631 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.443986 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.444117 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.444288 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.444474 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:48Z","lastTransitionTime":"2026-01-10T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.547561 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.547918 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.548060 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.548188 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.548430 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:48Z","lastTransitionTime":"2026-01-10T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.652032 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.652098 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.652121 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.652150 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.652171 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:48Z","lastTransitionTime":"2026-01-10T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.692958 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.693007 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:48 crc kubenswrapper[4810]: E0110 06:46:48.693164 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.692983 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:48 crc kubenswrapper[4810]: E0110 06:46:48.693292 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:46:48 crc kubenswrapper[4810]: E0110 06:46:48.693422 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.755591 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.755669 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.755693 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.755726 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.755746 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:48Z","lastTransitionTime":"2026-01-10T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.859112 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.859182 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.859243 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.859275 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.859292 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:48Z","lastTransitionTime":"2026-01-10T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.962139 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.962251 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.962271 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.962295 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:48 crc kubenswrapper[4810]: I0110 06:46:48.962312 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:48Z","lastTransitionTime":"2026-01-10T06:46:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.065720 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.065786 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.065804 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.065830 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.065846 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:49Z","lastTransitionTime":"2026-01-10T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.107092 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/1.log" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.107933 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/0.log" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.110551 4810 generic.go:334] "Generic (PLEG): container finished" podID="dce51084-e094-437c-a988-66b17982fd5d" containerID="e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02" exitCode=1 Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.110611 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02"} Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.110670 4810 scope.go:117] "RemoveContainer" containerID="6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.111805 4810 scope.go:117] "RemoveContainer" containerID="e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02" Jan 10 06:46:49 crc kubenswrapper[4810]: E0110 06:46:49.112056 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.133852 4810 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.155293 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.168607 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.168674 4810 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.168700 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.168733 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.168758 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:49Z","lastTransitionTime":"2026-01-10T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.173955 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.193884 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.213077 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588e
df7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:
46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.244278 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:46:46Z\\\",\\\"message\\\":\\\"ce (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0110 06:46:46.014069 5976 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0110 06:46:46.014236 5976 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0110 06:46:46.014239 5976 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0110 06:46:46.014725 5976 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0110 06:46:46.014841 5976 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0110 06:46:46.014864 5976 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0110 06:46:46.014869 5976 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0110 06:46:46.014898 5976 factory.go:656] Stopping watch factory\\\\nI0110 06:46:46.014897 5976 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0110 06:46:46.014913 5976 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0110 06:46:46.014987 5976 ovnkube.go:599] Stopped ovnkube\\\\nI0110 06:46:46.014949 5976 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:46:48Z\\\",\\\"message\\\":\\\"od event handler 6\\\\nI0110 06:46:48.082800 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0110 06:46:48.082885 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0110 06:46:48.082952 6264 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083045 6264 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0110 06:46:48.083106 6264 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0110 06:46:48.083250 6264 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083252 6264 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0110 06:46:48.083379 6264 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0110 06:46:48.083464 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0110 06:46:48.083463 6264 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0110 06:46:48.083575 6264 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0110 06:46:48.083639 6264 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0110 06:46:48.083641 6264 factory.go:656] Stopping watch factory\\\\nI0110 06:46:48.083748 6264 ovnkube.go:599] Stopped ovnkube\\\\nI0110 06:46:48.083851 6264 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0110 06:46:48.084018 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",
\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 
06:46:49.271364 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.271422 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.271435 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.271455 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.271471 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:49Z","lastTransitionTime":"2026-01-10T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.272468 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.294067 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.311493 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.327658 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.343591 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.362941 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.374022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.374069 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.374081 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.374099 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.374112 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:49Z","lastTransitionTime":"2026-01-10T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.384033 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.406727 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.425408 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.442437 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc 
kubenswrapper[4810]: I0110 06:46:49.461325 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.477238 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.477288 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.477302 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.477319 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.477331 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:49Z","lastTransitionTime":"2026-01-10T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.581137 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.581237 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.581258 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.581284 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.581302 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:49Z","lastTransitionTime":"2026-01-10T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.684276 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.684340 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.684360 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.684388 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.684405 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:49Z","lastTransitionTime":"2026-01-10T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.693350 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:49 crc kubenswrapper[4810]: E0110 06:46:49.693532 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.792545 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.792931 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.793279 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.793438 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.793580 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:49Z","lastTransitionTime":"2026-01-10T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.897290 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.897352 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.897370 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.897394 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:49 crc kubenswrapper[4810]: I0110 06:46:49.897412 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:49Z","lastTransitionTime":"2026-01-10T06:46:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.000461 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.000530 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.000550 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.000577 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.000595 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.103688 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.103755 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.103764 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.103795 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.103805 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.115808 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2"} Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.120678 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/1.log" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.138286 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.157932 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b
67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e
38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.187266 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:46:46Z\\\",\\\"message\\\":\\\"ce (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0110 06:46:46.014069 5976 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0110 06:46:46.014236 5976 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0110 06:46:46.014239 5976 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0110 06:46:46.014725 5976 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0110 06:46:46.014841 5976 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0110 06:46:46.014864 5976 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0110 06:46:46.014869 5976 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0110 06:46:46.014898 5976 factory.go:656] Stopping watch factory\\\\nI0110 06:46:46.014897 5976 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0110 06:46:46.014913 5976 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0110 06:46:46.014987 5976 ovnkube.go:599] Stopped ovnkube\\\\nI0110 06:46:46.014949 5976 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:46:48Z\\\",\\\"message\\\":\\\"od event handler 6\\\\nI0110 06:46:48.082800 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0110 06:46:48.082885 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0110 06:46:48.082952 6264 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083045 6264 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0110 06:46:48.083106 6264 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0110 06:46:48.083250 6264 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083252 6264 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0110 06:46:48.083379 6264 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0110 06:46:48.083464 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0110 06:46:48.083463 6264 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0110 06:46:48.083575 6264 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0110 06:46:48.083639 6264 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0110 06:46:48.083641 6264 factory.go:656] Stopping watch factory\\\\nI0110 06:46:48.083748 6264 ovnkube.go:599] Stopped ovnkube\\\\nI0110 06:46:48.083851 6264 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0110 06:46:48.084018 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",
\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 
06:46:50.205791 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.207175 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.207298 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.207319 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.207342 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.207362 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.228014 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.244090 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.260271 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.290617 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.310954 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.311010 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.311030 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.311053 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.311071 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.311276 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.326068 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.342688 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.355714 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8dfd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.367507 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc 
kubenswrapper[4810]: I0110 06:46:50.385444 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.402981 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.414586 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.414661 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.414674 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.414692 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.414704 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.418889 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.438470 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.450046 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.450126 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.450151 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.450185 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.450254 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: E0110 06:46:50.472268 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.477318 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.477396 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.477418 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.477444 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.477462 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: E0110 06:46:50.497236 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.501940 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.502187 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.502430 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.502706 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.502870 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: E0110 06:46:50.521679 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.526083 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.526401 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.526579 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.526777 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.527030 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: E0110 06:46:50.543956 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.548797 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.548873 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.548896 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.548927 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.548950 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: E0110 06:46:50.564024 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:50 crc kubenswrapper[4810]: E0110 06:46:50.564298 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.566135 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.566359 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.566524 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.566722 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc 
kubenswrapper[4810]: I0110 06:46:50.566956 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.670767 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.670861 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.670879 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.670902 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.670919 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.692662 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.692737 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.692735 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:50 crc kubenswrapper[4810]: E0110 06:46:50.693122 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:46:50 crc kubenswrapper[4810]: E0110 06:46:50.693496 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:46:50 crc kubenswrapper[4810]: E0110 06:46:50.693425 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.774459 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.774760 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.774999 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.775292 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.775487 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.878868 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.878934 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.878951 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.878976 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.879020 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.982274 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.982447 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.982479 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.982508 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:50 crc kubenswrapper[4810]: I0110 06:46:50.982533 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:50Z","lastTransitionTime":"2026-01-10T06:46:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.085367 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.085429 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.085446 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.085468 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.085485 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:51Z","lastTransitionTime":"2026-01-10T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.188457 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.188513 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.188536 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.188567 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.188587 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:51Z","lastTransitionTime":"2026-01-10T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.291902 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.291972 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.291995 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.292026 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.292047 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:51Z","lastTransitionTime":"2026-01-10T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.394681 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.394725 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.394739 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.394756 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.394769 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:51Z","lastTransitionTime":"2026-01-10T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.498593 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.498621 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.498628 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.498641 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.498650 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:51Z","lastTransitionTime":"2026-01-10T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.601804 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.601869 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.601892 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.601920 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.601942 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:51Z","lastTransitionTime":"2026-01-10T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.692760 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:51 crc kubenswrapper[4810]: E0110 06:46:51.693001 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.704645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.704703 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.704720 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.704856 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.705512 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:51Z","lastTransitionTime":"2026-01-10T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.714453 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.732128 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.751399 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:51 crc 
kubenswrapper[4810]: I0110 06:46:51.774849 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.800646 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.808137 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.808242 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.808266 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.808300 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.808322 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:51Z","lastTransitionTime":"2026-01-10T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.820885 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.840285 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.861129 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.886463 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588e
df7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:
46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z"
Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.910611 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.910746 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.910800 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.910844 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.910863 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:51Z","lastTransitionTime":"2026-01-10T06:46:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.919253 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6abd9c0b0cd346fdba11ec91c61623e1c5e64be9b208a2c594bf61161885a6ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:46:46Z\\\",\\\"message\\\":\\\"ce (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0110 06:46:46.014069 5976 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0110 06:46:46.014236 5976 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0110 06:46:46.014239 5976 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0110 06:46:46.014725 5976 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0110 06:46:46.014841 5976 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0110 06:46:46.014864 5976 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0110 06:46:46.014869 5976 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0110 06:46:46.014898 5976 factory.go:656] Stopping watch factory\\\\nI0110 06:46:46.014897 5976 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0110 06:46:46.014913 5976 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0110 06:46:46.014987 5976 ovnkube.go:599] Stopped ovnkube\\\\nI0110 06:46:46.014949 5976 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:46:48Z\\\",\\\"message\\\":\\\"od event handler 6\\\\nI0110 06:46:48.082800 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0110 06:46:48.082885 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0110 06:46:48.082952 6264 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083045 6264 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0110 06:46:48.083106 6264 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0110 06:46:48.083250 6264 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083252 6264 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0110 06:46:48.083379 6264 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0110 06:46:48.083464 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0110 06:46:48.083463 6264 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0110 06:46:48.083575 6264 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0110 06:46:48.083639 6264 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0110 06:46:48.083641 6264 factory.go:656] Stopping watch factory\\\\nI0110 06:46:48.083748 6264 ovnkube.go:599] Stopped ovnkube\\\\nI0110 06:46:48.083851 6264 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0110 06:46:48.084018 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",
\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 
06:46:51.938564 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.958047 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.974743 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:51 crc kubenswrapper[4810]: I0110 06:46:51.990305 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.022426 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06
:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.022608 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.022691 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.022709 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.022734 4810 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.022751 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:52Z","lastTransitionTime":"2026-01-10T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.045452 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.061569 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.125841 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.125932 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.125951 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.125978 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.125996 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:52Z","lastTransitionTime":"2026-01-10T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.219079 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs\") pod \"network-metrics-daemon-9nv84\" (UID: \"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:52 crc kubenswrapper[4810]: E0110 06:46:52.219350 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:46:52 crc kubenswrapper[4810]: E0110 06:46:52.219459 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs podName:6741fd18-31c0-4bc3-be74-c0f6080c67af nodeName:}" failed. No retries permitted until 2026-01-10 06:47:00.219430997 +0000 UTC m=+48.834923910 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs") pod "network-metrics-daemon-9nv84" (UID: "6741fd18-31c0-4bc3-be74-c0f6080c67af") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.228881 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.229028 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.229056 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.229102 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.229133 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:52Z","lastTransitionTime":"2026-01-10T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.331801 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.331877 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.331900 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.331937 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.331960 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:52Z","lastTransitionTime":"2026-01-10T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.435263 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.435334 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.435362 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.435394 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.435417 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:52Z","lastTransitionTime":"2026-01-10T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.538337 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.538392 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.538409 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.538432 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.538449 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:52Z","lastTransitionTime":"2026-01-10T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.641997 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.642073 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.642091 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.642119 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.642140 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:52Z","lastTransitionTime":"2026-01-10T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.692676 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.692776 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.692859 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:46:52 crc kubenswrapper[4810]: E0110 06:46:52.693024 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:46:52 crc kubenswrapper[4810]: E0110 06:46:52.693250 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 10 06:46:52 crc kubenswrapper[4810]: E0110 06:46:52.693423 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.721459 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.722806 4810 scope.go:117] "RemoveContainer" containerID="e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02"
Jan 10 06:46:52 crc kubenswrapper[4810]: E0110 06:46:52.723055 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d"
Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.743648 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.749586 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.749635 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.749655 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.749679 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.749696 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:52Z","lastTransitionTime":"2026-01-10T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.762028 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.777546 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.811038 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06
:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.827978 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.848786 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.852642 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.852762 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.852782 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.852807 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.852829 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:52Z","lastTransitionTime":"2026-01-10T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.869115 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.886160 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.901433 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc 
kubenswrapper[4810]: I0110 06:46:52.921331 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.941855 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.956153 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.956241 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.956260 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.956286 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.956305 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:52Z","lastTransitionTime":"2026-01-10T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.961968 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.977791 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:52 crc kubenswrapper[4810]: I0110 06:46:52.994168 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-10T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.012620 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.039307 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:46:48Z\\\",\\\"message\\\":\\\"od event handler 6\\\\nI0110 06:46:48.082800 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0110 06:46:48.082885 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0110 06:46:48.082952 6264 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083045 6264 controller.go:132] Adding controller 
ef_node_controller event handlers\\\\nI0110 06:46:48.083106 6264 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0110 06:46:48.083250 6264 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083252 6264 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0110 06:46:48.083379 6264 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0110 06:46:48.083464 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0110 06:46:48.083463 6264 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0110 06:46:48.083575 6264 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0110 06:46:48.083639 6264 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0110 06:46:48.083641 6264 factory.go:656] Stopping watch factory\\\\nI0110 06:46:48.083748 6264 ovnkube.go:599] Stopped ovnkube\\\\nI0110 06:46:48.083851 6264 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0110 06:46:48.084018 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756
803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.054115 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.058621 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.058684 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.058697 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:53 crc 
kubenswrapper[4810]: I0110 06:46:53.058715 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.058728 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:53Z","lastTransitionTime":"2026-01-10T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.161321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.161375 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.161394 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.161418 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.161464 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:53Z","lastTransitionTime":"2026-01-10T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.264440 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.265412 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.265567 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.265771 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.265914 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:53Z","lastTransitionTime":"2026-01-10T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.368884 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.369302 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.369449 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.369627 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.369760 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:53Z","lastTransitionTime":"2026-01-10T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.473015 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.473086 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.473104 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.473127 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.473145 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:53Z","lastTransitionTime":"2026-01-10T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.576447 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.576523 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.576540 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.576567 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.576585 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:53Z","lastTransitionTime":"2026-01-10T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.679710 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.679770 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.679788 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.679814 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.679831 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:53Z","lastTransitionTime":"2026-01-10T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.692279 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:53 crc kubenswrapper[4810]: E0110 06:46:53.692508 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.781888 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.781944 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.781966 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.781991 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.782021 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:53Z","lastTransitionTime":"2026-01-10T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.885079 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.885140 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.885157 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.885185 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.885234 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:53Z","lastTransitionTime":"2026-01-10T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.988381 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.988446 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.988486 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.988512 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:53 crc kubenswrapper[4810]: I0110 06:46:53.988533 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:53Z","lastTransitionTime":"2026-01-10T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.091004 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.091080 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.091100 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.091126 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.091146 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:54Z","lastTransitionTime":"2026-01-10T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.194385 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.194448 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.194472 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.194501 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.194522 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:54Z","lastTransitionTime":"2026-01-10T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.303166 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.303257 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.303276 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.303301 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.303319 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:54Z","lastTransitionTime":"2026-01-10T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.406481 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.406540 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.406558 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.406584 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.406603 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:54Z","lastTransitionTime":"2026-01-10T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.509664 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.509760 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.509783 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.509812 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.509834 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:54Z","lastTransitionTime":"2026-01-10T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.613254 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.613354 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.613379 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.613413 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.613437 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:54Z","lastTransitionTime":"2026-01-10T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.692957 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:46:54 crc kubenswrapper[4810]: E0110 06:46:54.693450 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.693908 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:46:54 crc kubenswrapper[4810]: E0110 06:46:54.694170 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.694509 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:46:54 crc kubenswrapper[4810]: E0110 06:46:54.694723 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.716401 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.716461 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.716480 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.716506 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.716525 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:54Z","lastTransitionTime":"2026-01-10T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.828302 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.828352 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.828370 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.828392 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.828410 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:54Z","lastTransitionTime":"2026-01-10T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.930763 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.930825 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.930842 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.930866 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:54 crc kubenswrapper[4810]: I0110 06:46:54.930883 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:54Z","lastTransitionTime":"2026-01-10T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.033957 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.034019 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.034036 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.034060 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.034117 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:55Z","lastTransitionTime":"2026-01-10T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.136659 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.136731 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.136748 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.136773 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.136791 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:55Z","lastTransitionTime":"2026-01-10T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.240081 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.240143 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.240162 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.240186 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.240248 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:55Z","lastTransitionTime":"2026-01-10T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.343181 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.343454 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.343472 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.343499 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.343521 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:55Z","lastTransitionTime":"2026-01-10T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.446852 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.446936 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.446953 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.446978 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.446998 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:55Z","lastTransitionTime":"2026-01-10T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.550593 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.550666 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.550684 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.550733 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.550752 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:55Z","lastTransitionTime":"2026-01-10T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.653415 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.653480 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.653498 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.653522 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.653540 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:55Z","lastTransitionTime":"2026-01-10T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.692160 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84"
Jan 10 06:46:55 crc kubenswrapper[4810]: E0110 06:46:55.692467 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.756468 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.756549 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.756566 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.756589 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.756609 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:55Z","lastTransitionTime":"2026-01-10T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.859806 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.859862 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.859879 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.859902 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.859920 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:55Z","lastTransitionTime":"2026-01-10T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.962225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.962283 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.962299 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.962321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:55 crc kubenswrapper[4810]: I0110 06:46:55.962339 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:55Z","lastTransitionTime":"2026-01-10T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.065526 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.065592 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.065609 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.065635 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.065655 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:56Z","lastTransitionTime":"2026-01-10T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.168806 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.168894 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.168920 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.168952 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.168973 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:56Z","lastTransitionTime":"2026-01-10T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.272105 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.272150 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.272169 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.272230 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.272249 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:56Z","lastTransitionTime":"2026-01-10T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.375057 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.375147 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.375165 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.375242 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.375260 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:56Z","lastTransitionTime":"2026-01-10T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.478073 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.478125 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.478142 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.478165 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.478183 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:56Z","lastTransitionTime":"2026-01-10T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.581303 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.581362 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.581384 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.581412 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.581433 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:56Z","lastTransitionTime":"2026-01-10T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.685552 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.685605 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.685698 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.685760 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.685780 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:56Z","lastTransitionTime":"2026-01-10T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.692338 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.692386 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.692358 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:46:56 crc kubenswrapper[4810]: E0110 06:46:56.692536 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 10 06:46:56 crc kubenswrapper[4810]: E0110 06:46:56.692686 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:46:56 crc kubenswrapper[4810]: E0110 06:46:56.692845 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.789446 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.789522 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.789540 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.789563 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.789613 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:56Z","lastTransitionTime":"2026-01-10T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.892617 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.892679 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.892696 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.892721 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.892737 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:56Z","lastTransitionTime":"2026-01-10T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.995692 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.995758 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.995785 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.995815 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:56 crc kubenswrapper[4810]: I0110 06:46:56.995836 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:56Z","lastTransitionTime":"2026-01-10T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.098705 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.098765 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.098782 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.098807 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.098825 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:57Z","lastTransitionTime":"2026-01-10T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.201430 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.201503 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.201524 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.201551 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.201569 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:57Z","lastTransitionTime":"2026-01-10T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.303989 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.304061 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.304084 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.304113 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.304134 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:57Z","lastTransitionTime":"2026-01-10T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.406561 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.406647 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.406661 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.406677 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.406688 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:57Z","lastTransitionTime":"2026-01-10T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.413309 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.425143 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.435072 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\
"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.452171 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.472915 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T0
6:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.495794 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additio
nal-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4e
c8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\
",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/
opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.509890 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.509990 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.510007 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.510034 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.510051 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:57Z","lastTransitionTime":"2026-01-10T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.527380 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:46:48Z\\\",\\\"message\\\":\\\"od event handler 6\\\\nI0110 06:46:48.082800 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0110 06:46:48.082885 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0110 06:46:48.082952 6264 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083045 6264 controller.go:132] Adding controller 
ef_node_controller event handlers\\\\nI0110 06:46:48.083106 6264 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0110 06:46:48.083250 6264 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083252 6264 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0110 06:46:48.083379 6264 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0110 06:46:48.083464 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0110 06:46:48.083463 6264 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0110 06:46:48.083575 6264 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0110 06:46:48.083639 6264 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0110 06:46:48.083641 6264 factory.go:656] Stopping watch factory\\\\nI0110 06:46:48.083748 6264 ovnkube.go:599] Stopped ovnkube\\\\nI0110 06:46:48.083851 6264 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0110 06:46:48.084018 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756
803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.546176 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.567845 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.586635 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.602315 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.613766 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.613822 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.613841 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.613865 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.613881 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:57Z","lastTransitionTime":"2026-01-10T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.634881 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.651277 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.670122 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.688534 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.692574 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:57 crc kubenswrapper[4810]: E0110 06:46:57.692724 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.708597 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8dfd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.716151 4810 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.716257 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.716284 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.716313 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.716353 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:57Z","lastTransitionTime":"2026-01-10T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.724615 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc 
kubenswrapper[4810]: I0110 06:46:57.746650 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.767173 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.819977 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.820042 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.820064 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.820095 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.820119 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:57Z","lastTransitionTime":"2026-01-10T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.922653 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.922770 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.922905 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.922951 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:57 crc kubenswrapper[4810]: I0110 06:46:57.922979 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:57Z","lastTransitionTime":"2026-01-10T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.025972 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.026083 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.026152 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.026185 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.026262 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:58Z","lastTransitionTime":"2026-01-10T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.129168 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.129300 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.129331 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.129362 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.129384 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:58Z","lastTransitionTime":"2026-01-10T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.232553 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.232611 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.232629 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.232655 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.232671 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:58Z","lastTransitionTime":"2026-01-10T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.336251 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.336314 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.336331 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.336356 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.336372 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:58Z","lastTransitionTime":"2026-01-10T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.439938 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.439985 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.440001 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.440025 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.440041 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:58Z","lastTransitionTime":"2026-01-10T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.548004 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.548064 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.548083 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.548106 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.548123 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:58Z","lastTransitionTime":"2026-01-10T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.652428 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.652472 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.652484 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.652500 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.652512 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:58Z","lastTransitionTime":"2026-01-10T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.693031 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.693053 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:46:58 crc kubenswrapper[4810]: E0110 06:46:58.693307 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:46:58 crc kubenswrapper[4810]: E0110 06:46:58.693376 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.693967 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:46:58 crc kubenswrapper[4810]: E0110 06:46:58.694169 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.756011 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.756080 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.756104 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.756135 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.756160 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:58Z","lastTransitionTime":"2026-01-10T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.859143 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.859266 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.859286 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.859312 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.859330 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:58Z","lastTransitionTime":"2026-01-10T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.965312 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.965398 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.965425 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.965459 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:58 crc kubenswrapper[4810]: I0110 06:46:58.965482 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:58Z","lastTransitionTime":"2026-01-10T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.068520 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.068576 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.068592 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.068615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.068629 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:59Z","lastTransitionTime":"2026-01-10T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.171518 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.171573 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.171589 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.171612 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.171628 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:59Z","lastTransitionTime":"2026-01-10T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.274650 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.274709 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.274729 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.274752 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.274770 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:59Z","lastTransitionTime":"2026-01-10T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.378184 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.378278 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.378295 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.378321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.378343 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:59Z","lastTransitionTime":"2026-01-10T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.481255 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.481299 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.481315 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.481367 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.481384 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:59Z","lastTransitionTime":"2026-01-10T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.584187 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.584287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.584310 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.584333 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.584351 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:59Z","lastTransitionTime":"2026-01-10T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.687292 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.687368 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.687398 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.687427 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.687486 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:59Z","lastTransitionTime":"2026-01-10T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.692909 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:46:59 crc kubenswrapper[4810]: E0110 06:46:59.693104 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.790410 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.790467 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.790484 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.790507 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.790524 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:59Z","lastTransitionTime":"2026-01-10T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.893877 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.893976 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.894001 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.894036 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.894062 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:59Z","lastTransitionTime":"2026-01-10T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.997451 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.997673 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.997700 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.997723 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:46:59 crc kubenswrapper[4810]: I0110 06:46:59.997741 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:46:59Z","lastTransitionTime":"2026-01-10T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.101290 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.101337 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.101354 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.101379 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.101396 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.203754 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.204012 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.204039 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.204062 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.204078 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.306742 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.306819 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.306843 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.306872 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.306893 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.313289 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs\") pod \"network-metrics-daemon-9nv84\" (UID: \"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:00 crc kubenswrapper[4810]: E0110 06:47:00.313464 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:47:00 crc kubenswrapper[4810]: E0110 06:47:00.313568 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs podName:6741fd18-31c0-4bc3-be74-c0f6080c67af nodeName:}" failed. No retries permitted until 2026-01-10 06:47:16.313537972 +0000 UTC m=+64.929030885 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs") pod "network-metrics-daemon-9nv84" (UID: "6741fd18-31c0-4bc3-be74-c0f6080c67af") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.410138 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.410449 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.410497 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.410523 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.410896 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.514359 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.514429 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.514451 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.514477 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.514495 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.617401 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.617467 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.617484 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.617508 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.617524 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.679807 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.679917 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.679938 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.679968 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.679989 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.692228 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.692290 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:00 crc kubenswrapper[4810]: E0110 06:47:00.692367 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:00 crc kubenswrapper[4810]: E0110 06:47:00.692463 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.692548 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:00 crc kubenswrapper[4810]: E0110 06:47:00.692680 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:00 crc kubenswrapper[4810]: E0110 06:47:00.696756 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.700931 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.700978 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.700994 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.701018 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.701038 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: E0110 06:47:00.720281 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.726130 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.726232 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.726256 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.726283 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.726311 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: E0110 06:47:00.746848 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.751415 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.751487 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.751513 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.751543 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.751565 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: E0110 06:47:00.766466 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.771359 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.771450 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.771469 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.771921 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.771976 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: E0110 06:47:00.788565 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:00 crc kubenswrapper[4810]: E0110 06:47:00.788800 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.791089 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.791138 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.791156 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.791186 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.791238 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.895334 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.895419 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.895434 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.895455 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.895467 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.998937 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.999000 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.999022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.999051 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:00 crc kubenswrapper[4810]: I0110 06:47:00.999070 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:00Z","lastTransitionTime":"2026-01-10T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.102734 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.102790 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.102845 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.102897 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.102919 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:01Z","lastTransitionTime":"2026-01-10T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.206069 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.206125 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.206143 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.206168 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.206188 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:01Z","lastTransitionTime":"2026-01-10T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.309384 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.309445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.309478 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.309507 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.309529 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:01Z","lastTransitionTime":"2026-01-10T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.411837 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.411900 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.411921 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.411944 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.411961 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:01Z","lastTransitionTime":"2026-01-10T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.515630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.515696 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.515717 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.515747 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.515770 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:01Z","lastTransitionTime":"2026-01-10T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.619260 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.619320 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.619338 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.619365 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.619383 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:01Z","lastTransitionTime":"2026-01-10T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.692464 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 06:47:01.692644 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.707826 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\
"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.724859 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.724936 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.724987 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.725023 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.725042 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:01Z","lastTransitionTime":"2026-01-10T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.727248 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 06:47:01.727458 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:47:33.727424156 +0000 UTC m=+82.342917219 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.727521 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 06:47:01.727696 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 
06:47:01.727785 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:47:33.727760574 +0000 UTC m=+82.343253487 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.738752 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\
\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.759629 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.778954 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.796733 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.810611 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8dfd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.823015 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:01 crc 
kubenswrapper[4810]: I0110 06:47:01.828285 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.828338 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.828385 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.828430 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.828454 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.828485 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.828507 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:01Z","lastTransitionTime":"2026-01-10T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 06:47:01.828559 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 06:47:01.828626 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:47:33.828604084 +0000 UTC m=+82.444097007 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 06:47:01.828629 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 06:47:01.828675 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 06:47:01.828699 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 06:47:01.828724 4810 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.828432 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 06:47:01.828743 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 06:47:01.828818 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 06:47:01.828781 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-10 06:47:33.828750587 +0000 UTC m=+82.444243500 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:47:01 crc kubenswrapper[4810]: E0110 06:47:01.828919 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-10 06:47:33.82889345 +0000 UTC m=+82.444386573 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.843918 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.866385 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.888344 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.908521 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.933325 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.933422 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.933546 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.933578 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.933599 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:01Z","lastTransitionTime":"2026-01-10T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.936319 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:
46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.961604 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additiona
l-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8
c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",
\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/op
t/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:01 crc kubenswrapper[4810]: I0110 06:47:01.994117 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:46:48Z\\\",\\\"message\\\":\\\"od event handler 6\\\\nI0110 06:46:48.082800 6264 handler.go:208] Removed *v1.Node event 
handler 2\\\\nI0110 06:46:48.082885 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0110 06:46:48.082952 6264 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083045 6264 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0110 06:46:48.083106 6264 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0110 06:46:48.083250 6264 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083252 6264 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0110 06:46:48.083379 6264 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0110 06:46:48.083464 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0110 06:46:48.083463 6264 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0110 06:46:48.083575 6264 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0110 06:46:48.083639 6264 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0110 06:46:48.083641 6264 factory.go:656] Stopping watch factory\\\\nI0110 06:46:48.083748 6264 ovnkube.go:599] Stopped ovnkube\\\\nI0110 06:46:48.083851 6264 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0110 06:46:48.084018 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756
803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.009719 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.029626 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.039581 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.039613 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.039622 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:02 crc 
kubenswrapper[4810]: I0110 06:47:02.039637 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.039650 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:02Z","lastTransitionTime":"2026-01-10T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.048894 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.065595 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.142295 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.142365 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.142383 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:02 crc 
kubenswrapper[4810]: I0110 06:47:02.142819 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.142877 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:02Z","lastTransitionTime":"2026-01-10T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.246162 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.246229 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.246241 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.246260 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.246271 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:02Z","lastTransitionTime":"2026-01-10T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.349550 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.349599 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.349613 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.349630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.349642 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:02Z","lastTransitionTime":"2026-01-10T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.452893 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.452950 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.452962 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.452982 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.452995 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:02Z","lastTransitionTime":"2026-01-10T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.555583 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.555698 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.555711 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.555727 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.555738 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:02Z","lastTransitionTime":"2026-01-10T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.658581 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.658623 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.658636 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.658653 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.658666 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:02Z","lastTransitionTime":"2026-01-10T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.691976 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.692013 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:47:02 crc kubenswrapper[4810]: E0110 06:47:02.692118 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.692129 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:47:02 crc kubenswrapper[4810]: E0110 06:47:02.692329 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:47:02 crc kubenswrapper[4810]: E0110 06:47:02.692455 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.761045 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.761101 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.761113 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.761128 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.761159 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:02Z","lastTransitionTime":"2026-01-10T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.863306 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.863363 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.863380 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.863405 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.863425 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:02Z","lastTransitionTime":"2026-01-10T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.966227 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.966265 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.966299 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.966318 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:02 crc kubenswrapper[4810]: I0110 06:47:02.966328 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:02Z","lastTransitionTime":"2026-01-10T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.069343 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.069661 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.069683 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.069709 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.069730 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:03Z","lastTransitionTime":"2026-01-10T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.172045 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.172086 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.172118 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.172137 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.172148 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:03Z","lastTransitionTime":"2026-01-10T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.279630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.279707 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.279748 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.279781 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.279804 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:03Z","lastTransitionTime":"2026-01-10T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.382689 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.382726 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.382738 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.382755 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.382768 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:03Z","lastTransitionTime":"2026-01-10T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.485302 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.485368 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.485385 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.485410 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.485428 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:03Z","lastTransitionTime":"2026-01-10T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.588431 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.588494 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.588505 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.588521 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.588535 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:03Z","lastTransitionTime":"2026-01-10T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.691824 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.691919 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.691939 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.691965 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.691981 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:03Z","lastTransitionTime":"2026-01-10T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.692021 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84"
Jan 10 06:47:03 crc kubenswrapper[4810]: E0110 06:47:03.692218 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.795329 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.795375 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.795386 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.795406 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.795418 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:03Z","lastTransitionTime":"2026-01-10T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.898364 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.898424 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.898441 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.898467 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:03 crc kubenswrapper[4810]: I0110 06:47:03.898485 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:03Z","lastTransitionTime":"2026-01-10T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.001838 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.001919 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.001940 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.001971 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.001993 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:04Z","lastTransitionTime":"2026-01-10T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.104495 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.104566 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.104591 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.104615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.104632 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:04Z","lastTransitionTime":"2026-01-10T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.208143 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.208185 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.208240 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.208261 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.208278 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:04Z","lastTransitionTime":"2026-01-10T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.310751 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.310823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.310833 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.310846 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.310856 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:04Z","lastTransitionTime":"2026-01-10T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.414409 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.414520 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.414580 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.414606 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.414670 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:04Z","lastTransitionTime":"2026-01-10T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.517015 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.517082 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.517099 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.517115 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.517128 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:04Z","lastTransitionTime":"2026-01-10T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.619799 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.619858 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.619874 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.619893 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.619909 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:04Z","lastTransitionTime":"2026-01-10T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.692559 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.692625 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.692673 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:04 crc kubenswrapper[4810]: E0110 06:47:04.692895 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:04 crc kubenswrapper[4810]: E0110 06:47:04.692987 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:04 crc kubenswrapper[4810]: E0110 06:47:04.693095 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.723056 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.723119 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.723136 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.723163 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.723182 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:04Z","lastTransitionTime":"2026-01-10T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.825838 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.825899 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.825917 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.825939 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.825959 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:04Z","lastTransitionTime":"2026-01-10T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.928319 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.928377 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.928388 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.928408 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:04 crc kubenswrapper[4810]: I0110 06:47:04.928419 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:04Z","lastTransitionTime":"2026-01-10T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.031600 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.032173 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.032421 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.032991 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.033238 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:05Z","lastTransitionTime":"2026-01-10T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.136551 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.136596 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.136608 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.136633 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.136651 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:05Z","lastTransitionTime":"2026-01-10T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.239997 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.240315 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.240462 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.240605 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.240786 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:05Z","lastTransitionTime":"2026-01-10T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.344271 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.344345 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.344366 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.344395 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.344414 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:05Z","lastTransitionTime":"2026-01-10T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.448094 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.448158 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.448181 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.448261 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.448288 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:05Z","lastTransitionTime":"2026-01-10T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.550510 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.550544 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.550554 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.550570 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.550581 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:05Z","lastTransitionTime":"2026-01-10T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.657098 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.657150 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.657169 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.657463 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.657510 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:05Z","lastTransitionTime":"2026-01-10T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.692647 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:05 crc kubenswrapper[4810]: E0110 06:47:05.693231 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.693822 4810 scope.go:117] "RemoveContainer" containerID="e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.761087 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.761145 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.761163 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.761186 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.761236 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:05Z","lastTransitionTime":"2026-01-10T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.864632 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.864707 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.864733 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.864766 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.864788 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:05Z","lastTransitionTime":"2026-01-10T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.967996 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.968081 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.968093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.968130 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:05 crc kubenswrapper[4810]: I0110 06:47:05.968141 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:05Z","lastTransitionTime":"2026-01-10T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.070245 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.070290 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.070299 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.070318 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.070327 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:06Z","lastTransitionTime":"2026-01-10T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.173157 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.173222 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.173236 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.173254 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.173268 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:06Z","lastTransitionTime":"2026-01-10T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.178000 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/1.log" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.181064 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerStarted","Data":"7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2"} Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.181549 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.201065 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94
cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.222231 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.248257 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",
\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\
"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70c
dc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.275549 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.275605 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.275619 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.275636 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.275649 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:06Z","lastTransitionTime":"2026-01-10T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.281610 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:46:48Z\\\",\\\"message\\\":\\\"od event handler 6\\\\nI0110 06:46:48.082800 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0110 06:46:48.082885 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0110 06:46:48.082952 6264 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083045 6264 controller.go:132] Adding controller 
ef_node_controller event handlers\\\\nI0110 06:46:48.083106 6264 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0110 06:46:48.083250 6264 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083252 6264 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0110 06:46:48.083379 6264 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0110 06:46:48.083464 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0110 06:46:48.083463 6264 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0110 06:46:48.083575 6264 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0110 06:46:48.083639 6264 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0110 06:46:48.083641 6264 factory.go:656] Stopping watch factory\\\\nI0110 06:46:48.083748 6264 ovnkube.go:599] Stopped ovnkube\\\\nI0110 06:46:48.083851 6264 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0110 06:46:48.084018 6264 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.295810 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.308621 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.319409 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.331469 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.341685 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.364818 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06
:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.378061 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.378111 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.378121 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.378134 4810 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.378143 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:06Z","lastTransitionTime":"2026-01-10T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.382293 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.392789 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.405073 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.416086 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.425253 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc 
kubenswrapper[4810]: I0110 06:47:06.438855 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.457102 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.469897 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:06Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.480027 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.480071 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.480085 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.480102 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.480115 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:06Z","lastTransitionTime":"2026-01-10T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.586289 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.586401 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.586426 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.586473 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.586492 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:06Z","lastTransitionTime":"2026-01-10T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.689544 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.689580 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.689589 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.689606 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.689616 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:06Z","lastTransitionTime":"2026-01-10T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.692810 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.692844 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:06 crc kubenswrapper[4810]: E0110 06:47:06.692901 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:06 crc kubenswrapper[4810]: E0110 06:47:06.693005 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.693072 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:06 crc kubenswrapper[4810]: E0110 06:47:06.693146 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.792795 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.792842 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.792852 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.792868 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.792878 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:06Z","lastTransitionTime":"2026-01-10T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.896127 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.896187 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.896230 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.896257 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.896277 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:06Z","lastTransitionTime":"2026-01-10T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.999162 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.999226 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.999235 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.999252 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:06 crc kubenswrapper[4810]: I0110 06:47:06.999262 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:06Z","lastTransitionTime":"2026-01-10T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.103004 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.103061 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.103078 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.103106 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.103124 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:07Z","lastTransitionTime":"2026-01-10T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.187959 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/2.log" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.188924 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/1.log" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.193265 4810 generic.go:334] "Generic (PLEG): container finished" podID="dce51084-e094-437c-a988-66b17982fd5d" containerID="7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2" exitCode=1 Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.193325 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2"} Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.193373 4810 scope.go:117] "RemoveContainer" containerID="e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.195005 4810 scope.go:117] "RemoveContainer" containerID="7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2" Jan 10 06:47:07 crc kubenswrapper[4810]: E0110 06:47:07.198288 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.207441 4810 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.207491 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.207509 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.207531 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.207547 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:07Z","lastTransitionTime":"2026-01-10T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.226582 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.242269 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.258557 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.270767 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.285027 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.304857 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.310549 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.310621 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.310645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.310676 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.310697 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:07Z","lastTransitionTime":"2026-01-10T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.320295 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.339375 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.352392 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.365819 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc 
kubenswrapper[4810]: I0110 06:47:07.386248 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.404599 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.414638 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.414679 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.414692 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.414708 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.414721 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:07Z","lastTransitionTime":"2026-01-10T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.418604 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.433782 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.455170 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.473012 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588e
df7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:
46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.504792 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e0fe49528fe71b6e0701e95533b6f5de7316c5d19213f4275383522fb5befc02\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:46:48Z\\\",\\\"message\\\":\\\"od event handler 6\\\\nI0110 06:46:48.082800 6264 handler.go:208] Removed *v1.Node event handler 2\\\\nI0110 06:46:48.082885 6264 handler.go:208] Removed *v1.Node event handler 7\\\\nI0110 06:46:48.082952 6264 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083045 6264 controller.go:132] Adding controller 
ef_node_controller event handlers\\\\nI0110 06:46:48.083106 6264 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0110 06:46:48.083250 6264 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0110 06:46:48.083252 6264 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0110 06:46:48.083379 6264 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0110 06:46:48.083464 6264 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0110 06:46:48.083463 6264 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0110 06:46:48.083575 6264 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0110 06:46:48.083639 6264 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0110 06:46:48.083641 6264 factory.go:656] Stopping watch factory\\\\nI0110 06:46:48.083748 6264 ovnkube.go:599] Stopped ovnkube\\\\nI0110 06:46:48.083851 6264 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0110 06:46:48.084018 6264 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:07Z\\\",\\\"message\\\":\\\"-85b44fc459-gdk6g in node crc\\\\nI0110 06:47:06.665499 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0110 06:47:06.665490 6435 model_client.go:382] Update operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] 
Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:06.665521 6435 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0110 06:47:06.665552 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-di
r\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name
\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: 
I0110 06:47:07.517063 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.517108 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.517120 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.517137 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.517149 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:07Z","lastTransitionTime":"2026-01-10T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.521939 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d427
94017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:07Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.620016 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.620350 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.620445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.620531 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.620616 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:07Z","lastTransitionTime":"2026-01-10T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.693094 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:07 crc kubenswrapper[4810]: E0110 06:47:07.693311 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.723315 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.723351 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.723362 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.723378 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.723388 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:07Z","lastTransitionTime":"2026-01-10T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.831397 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.831464 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.831483 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.831510 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.831537 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:07Z","lastTransitionTime":"2026-01-10T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.934470 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.934535 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.934553 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.934578 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:07 crc kubenswrapper[4810]: I0110 06:47:07.934595 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:07Z","lastTransitionTime":"2026-01-10T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.037894 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.037935 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.037944 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.037958 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.037969 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:08Z","lastTransitionTime":"2026-01-10T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.141372 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.141479 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.141506 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.141542 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.141570 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:08Z","lastTransitionTime":"2026-01-10T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.199370 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/2.log" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.203691 4810 scope.go:117] "RemoveContainer" containerID="7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2" Jan 10 06:47:08 crc kubenswrapper[4810]: E0110 06:47:08.203828 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.221271 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.242331 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06
:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.244109 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.244157 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.244174 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.244232 4810 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.244251 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:08Z","lastTransitionTime":"2026-01-10T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.263875 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.286029 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.304486 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.323153 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8dfd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.340306 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc 
kubenswrapper[4810]: I0110 06:47:08.347374 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.347452 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.347477 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.347516 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.347541 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:08Z","lastTransitionTime":"2026-01-10T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.359897 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8
e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.379286 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.401006 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.419947 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.441744 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.450068 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:08 crc 
kubenswrapper[4810]: I0110 06:47:08.450381 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.450519 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.450664 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.450802 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:08Z","lastTransitionTime":"2026-01-10T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.466990 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1
c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.499036 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:07Z\\\",\\\"message\\\":\\\"-85b44fc459-gdk6g in node crc\\\\nI0110 06:47:06.665499 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0110 06:47:06.665490 6435 model_client.go:382] Update 
operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:06.665521 6435 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0110 06:47:06.665552 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756
803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.516532 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.535106 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.553525 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.553585 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.553608 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:08 crc 
kubenswrapper[4810]: I0110 06:47:08.553638 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.553661 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:08Z","lastTransitionTime":"2026-01-10T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.556322 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.573118 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:08Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.656929 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.656988 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.657007 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:08 crc 
kubenswrapper[4810]: I0110 06:47:08.657032 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.657050 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:08Z","lastTransitionTime":"2026-01-10T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.692339 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.692386 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.692356 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:08 crc kubenswrapper[4810]: E0110 06:47:08.692581 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:08 crc kubenswrapper[4810]: E0110 06:47:08.692692 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:08 crc kubenswrapper[4810]: E0110 06:47:08.692813 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.761011 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.761077 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.761099 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.761129 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.761153 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:08Z","lastTransitionTime":"2026-01-10T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.863794 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.863839 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.863853 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.863872 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.863884 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:08Z","lastTransitionTime":"2026-01-10T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.967049 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.967116 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.967127 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.967148 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:08 crc kubenswrapper[4810]: I0110 06:47:08.967159 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:08Z","lastTransitionTime":"2026-01-10T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.069933 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.070094 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.070114 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.070150 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.070169 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:09Z","lastTransitionTime":"2026-01-10T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.173354 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.173408 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.173425 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.173451 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.173469 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:09Z","lastTransitionTime":"2026-01-10T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.276103 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.276145 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.276157 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.276186 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.276216 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:09Z","lastTransitionTime":"2026-01-10T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.379055 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.379113 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.379130 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.379154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.379173 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:09Z","lastTransitionTime":"2026-01-10T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.482953 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.483016 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.483085 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.483114 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.483137 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:09Z","lastTransitionTime":"2026-01-10T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.592469 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.592541 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.592565 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.592596 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.592617 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:09Z","lastTransitionTime":"2026-01-10T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.692790 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:09 crc kubenswrapper[4810]: E0110 06:47:09.692991 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.694732 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.694789 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.694809 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.694832 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.694850 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:09Z","lastTransitionTime":"2026-01-10T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.797315 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.797391 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.797409 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.797436 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.797459 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:09Z","lastTransitionTime":"2026-01-10T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.900094 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.900176 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.900238 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.900304 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:09 crc kubenswrapper[4810]: I0110 06:47:09.900328 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:09Z","lastTransitionTime":"2026-01-10T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.003252 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.003308 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.003327 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.003350 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.003367 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:10Z","lastTransitionTime":"2026-01-10T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.106338 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.106403 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.106425 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.106453 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.106476 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:10Z","lastTransitionTime":"2026-01-10T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.209391 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.209487 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.209507 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.209533 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.209550 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:10Z","lastTransitionTime":"2026-01-10T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.313187 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.313311 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.313335 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.313368 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.313387 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:10Z","lastTransitionTime":"2026-01-10T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.416950 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.417017 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.417038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.417063 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.417082 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:10Z","lastTransitionTime":"2026-01-10T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.520313 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.520371 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.520387 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.520410 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.520428 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:10Z","lastTransitionTime":"2026-01-10T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.622941 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.623008 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.623027 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.623052 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.623075 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:10Z","lastTransitionTime":"2026-01-10T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.692757 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.692802 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.692766 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:10 crc kubenswrapper[4810]: E0110 06:47:10.692942 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:10 crc kubenswrapper[4810]: E0110 06:47:10.693068 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:10 crc kubenswrapper[4810]: E0110 06:47:10.693260 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.726292 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.726361 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.726386 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.726418 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.726442 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:10Z","lastTransitionTime":"2026-01-10T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.831888 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.831963 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.831987 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.832014 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.832033 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:10Z","lastTransitionTime":"2026-01-10T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.935083 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.935243 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.935276 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.935305 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:10 crc kubenswrapper[4810]: I0110 06:47:10.935326 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:10Z","lastTransitionTime":"2026-01-10T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.038072 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.038137 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.038154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.038179 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.038221 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.141070 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.141142 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.141165 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.141225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.141250 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.174514 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.174576 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.174598 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.174629 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.174654 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: E0110 06:47:11.196907 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.202318 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.202357 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.202368 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.202388 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.202402 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: E0110 06:47:11.222805 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.228369 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.228454 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.228473 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.228495 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.228512 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: E0110 06:47:11.249380 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.254478 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.254531 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.254548 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.254573 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.254590 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: E0110 06:47:11.274656 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.279773 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.279822 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.279841 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.279863 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.279881 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: E0110 06:47:11.300442 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: E0110 06:47:11.300543 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.302024 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.302089 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.302110 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.302135 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.302153 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.405497 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.405558 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.405575 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.405600 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.405619 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.508266 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.508326 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.508343 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.508367 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.508389 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.612096 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.612172 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.612190 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.612259 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.612277 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.692009 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:11 crc kubenswrapper[4810]: E0110 06:47:11.692589 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.710293 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.715135 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.715233 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.715347 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.715384 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.715409 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.729071 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.749980 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.769869 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.787739 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc 
kubenswrapper[4810]: I0110 06:47:11.809169 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.818165 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.818225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.818239 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.818255 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.818266 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.830962 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.847767 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.862047 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.881581 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.898637 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588e
df7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:
46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.920495 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.920534 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.920546 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.920565 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.920577 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:11Z","lastTransitionTime":"2026-01-10T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.934942 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:07Z\\\",\\\"message\\\":\\\"-85b44fc459-gdk6g in node crc\\\\nI0110 06:47:06.665499 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0110 06:47:06.665490 6435 model_client.go:382] Update 
operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:06.665521 6435 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0110 06:47:06.665552 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756
803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.949725 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:11 crc kubenswrapper[4810]: I0110 06:47:11.977134 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.000318 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.023936 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:12Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.024932 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.024986 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.025010 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.025039 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.025059 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:12Z","lastTransitionTime":"2026-01-10T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.041074 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:12Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.055374 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:12Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.128896 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.129210 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.129332 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.129445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.129549 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:12Z","lastTransitionTime":"2026-01-10T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.233362 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.233423 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.233447 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.233474 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.233734 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:12Z","lastTransitionTime":"2026-01-10T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.337396 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.337779 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.337946 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.338096 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.338300 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:12Z","lastTransitionTime":"2026-01-10T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.441462 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.441891 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.442091 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.442305 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.442469 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:12Z","lastTransitionTime":"2026-01-10T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.545024 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.545353 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.545526 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.545724 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.545897 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:12Z","lastTransitionTime":"2026-01-10T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.648658 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.648984 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.649175 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.649380 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.649512 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:12Z","lastTransitionTime":"2026-01-10T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.692373 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.692377 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.692491 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:12 crc kubenswrapper[4810]: E0110 06:47:12.692900 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:12 crc kubenswrapper[4810]: E0110 06:47:12.693049 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:12 crc kubenswrapper[4810]: E0110 06:47:12.693259 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.752073 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.752139 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.752165 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.752231 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.752257 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:12Z","lastTransitionTime":"2026-01-10T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.855031 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.855101 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.855124 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.855154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.855181 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:12Z","lastTransitionTime":"2026-01-10T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.957312 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.957372 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.957388 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.957412 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:12 crc kubenswrapper[4810]: I0110 06:47:12.957429 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:12Z","lastTransitionTime":"2026-01-10T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.060328 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.060379 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.060391 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.060414 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.060432 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:13Z","lastTransitionTime":"2026-01-10T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.163297 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.163355 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.163372 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.163398 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.163415 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:13Z","lastTransitionTime":"2026-01-10T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.266667 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.266742 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.266766 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.266797 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.266827 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:13Z","lastTransitionTime":"2026-01-10T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.369965 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.370044 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.370068 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.370100 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.370122 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:13Z","lastTransitionTime":"2026-01-10T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.473102 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.473139 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.473150 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.473162 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.473171 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:13Z","lastTransitionTime":"2026-01-10T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.576499 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.576571 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.576594 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.576624 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.576646 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:13Z","lastTransitionTime":"2026-01-10T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.679491 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.679785 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.679927 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.680075 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.680180 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:13Z","lastTransitionTime":"2026-01-10T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.691956 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:13 crc kubenswrapper[4810]: E0110 06:47:13.692163 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.783168 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.783281 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.783299 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.783322 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.783340 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:13Z","lastTransitionTime":"2026-01-10T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.886670 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.886714 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.886730 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.886751 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.886768 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:13Z","lastTransitionTime":"2026-01-10T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.989311 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.989382 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.989399 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.989421 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:13 crc kubenswrapper[4810]: I0110 06:47:13.989437 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:13Z","lastTransitionTime":"2026-01-10T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.092692 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.092761 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.092784 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.092813 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.092834 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:14Z","lastTransitionTime":"2026-01-10T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.196161 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.196260 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.196287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.196321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.196346 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:14Z","lastTransitionTime":"2026-01-10T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.299420 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.299733 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.299889 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.300022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.300152 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:14Z","lastTransitionTime":"2026-01-10T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.402981 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.403051 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.403068 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.403094 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.403111 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:14Z","lastTransitionTime":"2026-01-10T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.506091 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.506149 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.506165 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.506189 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.506289 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:14Z","lastTransitionTime":"2026-01-10T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.608991 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.609064 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.609089 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.609118 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.609140 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:14Z","lastTransitionTime":"2026-01-10T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.692454 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.692498 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.692728 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:14 crc kubenswrapper[4810]: E0110 06:47:14.692723 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:14 crc kubenswrapper[4810]: E0110 06:47:14.693067 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:14 crc kubenswrapper[4810]: E0110 06:47:14.692925 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.712152 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.712253 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.712278 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.712315 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.712339 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:14Z","lastTransitionTime":"2026-01-10T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.815868 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.815934 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.815955 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.815986 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.816011 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:14Z","lastTransitionTime":"2026-01-10T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.919258 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.919350 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.919359 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.919372 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:14 crc kubenswrapper[4810]: I0110 06:47:14.919382 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:14Z","lastTransitionTime":"2026-01-10T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.022026 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.022083 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.022100 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.022124 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.022143 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:15Z","lastTransitionTime":"2026-01-10T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.125054 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.125139 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.125171 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.125238 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.125265 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:15Z","lastTransitionTime":"2026-01-10T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.227879 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.227926 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.227943 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.227968 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.227989 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:15Z","lastTransitionTime":"2026-01-10T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.330837 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.330889 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.330901 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.330917 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.330934 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:15Z","lastTransitionTime":"2026-01-10T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.433813 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.433844 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.433852 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.433864 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.433872 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:15Z","lastTransitionTime":"2026-01-10T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.536321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.536370 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.536381 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.536398 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.536409 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:15Z","lastTransitionTime":"2026-01-10T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.639357 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.639397 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.639410 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.639427 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.639441 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:15Z","lastTransitionTime":"2026-01-10T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.692343 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:15 crc kubenswrapper[4810]: E0110 06:47:15.692608 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.741922 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.741964 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.741976 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.741992 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.742003 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:15Z","lastTransitionTime":"2026-01-10T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.843688 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.843735 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.843749 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.843767 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.843782 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:15Z","lastTransitionTime":"2026-01-10T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.947135 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.947159 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.947167 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.947179 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:15 crc kubenswrapper[4810]: I0110 06:47:15.947186 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:15Z","lastTransitionTime":"2026-01-10T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.049050 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.049098 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.049108 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.049120 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.049128 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:16Z","lastTransitionTime":"2026-01-10T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.151204 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.151246 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.151256 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.151287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.151296 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:16Z","lastTransitionTime":"2026-01-10T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.253076 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.253115 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.253127 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.253141 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.253151 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:16Z","lastTransitionTime":"2026-01-10T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.355462 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.355517 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.355541 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.355567 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.355590 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:16Z","lastTransitionTime":"2026-01-10T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.385347 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs\") pod \"network-metrics-daemon-9nv84\" (UID: \"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:16 crc kubenswrapper[4810]: E0110 06:47:16.385576 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:47:16 crc kubenswrapper[4810]: E0110 06:47:16.385674 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs podName:6741fd18-31c0-4bc3-be74-c0f6080c67af nodeName:}" failed. No retries permitted until 2026-01-10 06:47:48.385648862 +0000 UTC m=+97.001141815 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs") pod "network-metrics-daemon-9nv84" (UID: "6741fd18-31c0-4bc3-be74-c0f6080c67af") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.458160 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.458220 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.458229 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.458246 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.458257 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:16Z","lastTransitionTime":"2026-01-10T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.561623 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.561654 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.561664 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.561676 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.561686 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:16Z","lastTransitionTime":"2026-01-10T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.664015 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.664042 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.664051 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.664064 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.664072 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:16Z","lastTransitionTime":"2026-01-10T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.692813 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.692840 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.692813 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:47:16 crc kubenswrapper[4810]: E0110 06:47:16.692949 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:47:16 crc kubenswrapper[4810]: E0110 06:47:16.693018 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:47:16 crc kubenswrapper[4810]: E0110 06:47:16.693146 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.767290 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.767338 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.767351 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.767372 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.767388 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:16Z","lastTransitionTime":"2026-01-10T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.869882 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.869924 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.869936 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.869956 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.869966 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:16Z","lastTransitionTime":"2026-01-10T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.972144 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.972187 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.972218 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.972237 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:16 crc kubenswrapper[4810]: I0110 06:47:16.972249 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:16Z","lastTransitionTime":"2026-01-10T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.075170 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.075229 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.075242 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.075260 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.075272 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:17Z","lastTransitionTime":"2026-01-10T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.178797 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.178853 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.178870 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.178896 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.178916 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:17Z","lastTransitionTime":"2026-01-10T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.281729 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.281807 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.281829 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.281858 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.281876 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:17Z","lastTransitionTime":"2026-01-10T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.386161 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.386289 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.386317 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.386354 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.386382 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:17Z","lastTransitionTime":"2026-01-10T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.489670 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.489725 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.489738 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.489757 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.489774 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:17Z","lastTransitionTime":"2026-01-10T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.592132 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.592184 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.592214 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.592234 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.592249 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:17Z","lastTransitionTime":"2026-01-10T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.692540 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84"
Jan 10 06:47:17 crc kubenswrapper[4810]: E0110 06:47:17.692754 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.695264 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.695300 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.695313 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.695330 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.695343 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:17Z","lastTransitionTime":"2026-01-10T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.797908 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.797948 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.797962 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.797980 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.797994 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:17Z","lastTransitionTime":"2026-01-10T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.900759 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.901032 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.901097 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.901246 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:17 crc kubenswrapper[4810]: I0110 06:47:17.901333 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:17Z","lastTransitionTime":"2026-01-10T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.004374 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.005810 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.005927 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.006040 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.006161 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:18Z","lastTransitionTime":"2026-01-10T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.109007 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.109052 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.109067 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.109088 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.109103 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:18Z","lastTransitionTime":"2026-01-10T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.211340 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.211662 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.211805 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.211937 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.212074 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:18Z","lastTransitionTime":"2026-01-10T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.314919 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.315135 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.315236 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.315438 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.315534 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:18Z","lastTransitionTime":"2026-01-10T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.417901 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.417927 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.417938 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.417951 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.417961 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:18Z","lastTransitionTime":"2026-01-10T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.520295 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.520508 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.520601 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.520695 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.520777 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:18Z","lastTransitionTime":"2026-01-10T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.622621 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.622661 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.622671 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.622686 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.622698 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:18Z","lastTransitionTime":"2026-01-10T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.692090 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.692136 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:47:18 crc kubenswrapper[4810]: E0110 06:47:18.692503 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:47:18 crc kubenswrapper[4810]: E0110 06:47:18.692433 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.692157 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:47:18 crc kubenswrapper[4810]: E0110 06:47:18.692751 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.724691 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.724732 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.724741 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.724756 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.724764 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:18Z","lastTransitionTime":"2026-01-10T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.826895 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.826934 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.826947 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.826963 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.826976 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:18Z","lastTransitionTime":"2026-01-10T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.929024 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.929072 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.929081 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.929097 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:18 crc kubenswrapper[4810]: I0110 06:47:18.929107 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:18Z","lastTransitionTime":"2026-01-10T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.031421 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.031481 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.031498 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.031522 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.031539 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:19Z","lastTransitionTime":"2026-01-10T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.134510 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.134559 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.134576 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.134597 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.134615 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:19Z","lastTransitionTime":"2026-01-10T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.237316 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.237562 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.237626 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.237689 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.237748 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:19Z","lastTransitionTime":"2026-01-10T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.241641 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7gh2_34d87e8a-cdfb-46ed-97db-2d07cffec516/kube-multus/0.log"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.241687 4810 generic.go:334] "Generic (PLEG): container finished" podID="34d87e8a-cdfb-46ed-97db-2d07cffec516" containerID="90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae" exitCode=1
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.241726 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7gh2" event={"ID":"34d87e8a-cdfb-46ed-97db-2d07cffec516","Type":"ContainerDied","Data":"90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae"}
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.242151 4810 scope.go:117] "RemoveContainer" containerID="90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae"
Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.256960 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate:
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.271484 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.288227 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.299930 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.317663 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc 
kubenswrapper[4810]: I0110 06:47:19.333741 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.340124 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.340249 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.340328 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.340405 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.340477 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:19Z","lastTransitionTime":"2026-01-10T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.348751 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.358991 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.371156 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.381801 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:18Z\\\",\\\"message\\\":\\\"2026-01-10T06:46:33+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43\\\\n2026-01-10T06:46:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43 to /host/opt/cni/bin/\\\\n2026-01-10T06:46:33Z [verbose] multus-daemon started\\\\n2026-01-10T06:46:33Z [verbose] Readiness Indicator file check\\\\n2026-01-10T06:47:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.395797 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1
c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.423077 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:07Z\\\",\\\"message\\\":\\\"-85b44fc459-gdk6g in node crc\\\\nI0110 06:47:06.665499 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0110 06:47:06.665490 6435 model_client.go:382] Update 
operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:06.665521 6435 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0110 06:47:06.665552 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756
803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.438864 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.443254 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.443319 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.443339 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.443365 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.443382 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:19Z","lastTransitionTime":"2026-01-10T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.460797 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.474462 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.484836 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.495658 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.504734 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:19Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.546446 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.546489 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.546500 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.546517 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.546528 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:19Z","lastTransitionTime":"2026-01-10T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.648498 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.648529 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.648536 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.648550 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.648558 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:19Z","lastTransitionTime":"2026-01-10T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.693226 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:19 crc kubenswrapper[4810]: E0110 06:47:19.693609 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.693763 4810 scope.go:117] "RemoveContainer" containerID="7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2" Jan 10 06:47:19 crc kubenswrapper[4810]: E0110 06:47:19.693913 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.750600 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.750649 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.750657 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.750672 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.750681 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:19Z","lastTransitionTime":"2026-01-10T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.853746 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.853800 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.853812 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.853831 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.853840 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:19Z","lastTransitionTime":"2026-01-10T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.956647 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.956686 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.956714 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.956732 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:19 crc kubenswrapper[4810]: I0110 06:47:19.956743 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:19Z","lastTransitionTime":"2026-01-10T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.059100 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.059149 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.059166 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.059187 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.059237 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:20Z","lastTransitionTime":"2026-01-10T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.166895 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.166981 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.166998 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.167022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.167039 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:20Z","lastTransitionTime":"2026-01-10T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.245888 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7gh2_34d87e8a-cdfb-46ed-97db-2d07cffec516/kube-multus/0.log" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.246127 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7gh2" event={"ID":"34d87e8a-cdfb-46ed-97db-2d07cffec516","Type":"ContainerStarted","Data":"41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94"} Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.257856 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.269023 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.269076 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.269094 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.269120 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:20 crc 
kubenswrapper[4810]: I0110 06:47:20.269138 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:20Z","lastTransitionTime":"2026-01-10T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.277675 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.298813 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.310708 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.319246 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.333273 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8dfd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.343365 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc 
kubenswrapper[4810]: I0110 06:47:20.353586 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.364494 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.371255 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.371297 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.371309 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.371325 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.371337 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:20Z","lastTransitionTime":"2026-01-10T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.374409 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.385088 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.398936 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:18Z\\\",\\\"message\\\":\\\"2026-01-10T06:46:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43\\\\n2026-01-10T06:46:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43 to /host/opt/cni/bin/\\\\n2026-01-10T06:46:33Z [verbose] multus-daemon started\\\\n2026-01-10T06:46:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-10T06:47:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.412169 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f
6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.438176 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:07Z\\\",\\\"message\\\":\\\"-85b44fc459-gdk6g in node crc\\\\nI0110 06:47:06.665499 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0110 06:47:06.665490 6435 model_client.go:382] Update 
operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:06.665521 6435 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0110 06:47:06.665552 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756
803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.450404 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.462084 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.472059 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.474053 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.474097 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.474115 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.474138 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.474154 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:20Z","lastTransitionTime":"2026-01-10T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.482213 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.575921 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.575958 4810 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.575974 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.575994 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.576011 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:20Z","lastTransitionTime":"2026-01-10T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.678022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.678476 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.678759 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.679302 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.679689 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:20Z","lastTransitionTime":"2026-01-10T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.692810 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:20 crc kubenswrapper[4810]: E0110 06:47:20.692898 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.693041 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:20 crc kubenswrapper[4810]: E0110 06:47:20.693085 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.693759 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:20 crc kubenswrapper[4810]: E0110 06:47:20.694089 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.782516 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.782889 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.783129 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.783383 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.783524 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:20Z","lastTransitionTime":"2026-01-10T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.886602 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.886667 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.886685 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.886711 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.886729 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:20Z","lastTransitionTime":"2026-01-10T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.988600 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.988636 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.988645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.988658 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:20 crc kubenswrapper[4810]: I0110 06:47:20.988668 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:20Z","lastTransitionTime":"2026-01-10T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.091804 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.091874 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.091892 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.091950 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.091969 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.194298 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.194546 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.194612 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.194679 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.194738 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.297168 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.297287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.297313 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.297344 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.297366 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.351136 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.351226 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.351251 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.351281 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.351304 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: E0110 06:47:21.364666 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.369168 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.369250 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.369269 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.369291 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.369308 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: E0110 06:47:21.381419 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.385860 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.385932 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.385951 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.385979 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.385995 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: E0110 06:47:21.401389 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.410881 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.410936 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.410952 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.410975 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.410992 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: E0110 06:47:21.424581 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.428384 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.428595 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.428746 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.428869 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.428977 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: E0110 06:47:21.442534 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: E0110 06:47:21.443002 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.444479 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.444609 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.444697 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.444781 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.444863 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.547693 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.547717 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.547727 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.547753 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.547763 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.649891 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.649937 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.649953 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.649973 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.649989 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.692754 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:21 crc kubenswrapper[4810]: E0110 06:47:21.692961 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.721306 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.732889 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.746733 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.752743 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.752780 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.752792 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.752812 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.752824 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.758654 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.769239 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.783258 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc 
kubenswrapper[4810]: I0110 06:47:21.800150 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.812887 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.827728 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.840104 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.852571 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.855011 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.855043 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.855051 4810 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.855065 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.855075 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.871228 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b
9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.892604 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:07Z\\\",\\\"message\\\":\\\"-85b44fc459-gdk6g in node crc\\\\nI0110 06:47:06.665499 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0110 06:47:06.665490 6435 model_client.go:382] Update 
operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:06.665521 6435 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0110 06:47:06.665552 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756
803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.906175 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.916533 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.928580 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.941177 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.958068 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 
06:47:21.958212 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.958236 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.958259 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.958277 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:21Z","lastTransitionTime":"2026-01-10T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:21 crc kubenswrapper[4810]: I0110 06:47:21.958805 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:18Z\\\",\\\"message\\\":\\\"2026-01-10T06:46:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43\\\\n2026-01-10T06:46:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43 to /host/opt/cni/bin/\\\\n2026-01-10T06:46:33Z [verbose] multus-daemon started\\\\n2026-01-10T06:46:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-10T06:47:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.060596 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.060636 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.060645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.060659 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.060670 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:22Z","lastTransitionTime":"2026-01-10T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.163888 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.163932 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.163940 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.163956 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.163966 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:22Z","lastTransitionTime":"2026-01-10T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.265729 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.265784 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.265799 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.265821 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.265838 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:22Z","lastTransitionTime":"2026-01-10T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.368093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.368136 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.368144 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.368159 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.368169 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:22Z","lastTransitionTime":"2026-01-10T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.470757 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.470793 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.470801 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.470813 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.470821 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:22Z","lastTransitionTime":"2026-01-10T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.573659 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.573715 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.573733 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.573758 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.573775 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:22Z","lastTransitionTime":"2026-01-10T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.676144 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.676213 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.676227 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.676244 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.676256 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:22Z","lastTransitionTime":"2026-01-10T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.692913 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.692987 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.692989 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:22 crc kubenswrapper[4810]: E0110 06:47:22.693084 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:22 crc kubenswrapper[4810]: E0110 06:47:22.693466 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:22 crc kubenswrapper[4810]: E0110 06:47:22.693573 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.778740 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.778802 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.778819 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.778844 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.778864 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:22Z","lastTransitionTime":"2026-01-10T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.881593 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.881669 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.881696 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.881766 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.881788 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:22Z","lastTransitionTime":"2026-01-10T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.984873 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.984936 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.984958 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.984989 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:22 crc kubenswrapper[4810]: I0110 06:47:22.985031 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:22Z","lastTransitionTime":"2026-01-10T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.087583 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.087652 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.087676 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.087707 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.087725 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:23Z","lastTransitionTime":"2026-01-10T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.190109 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.190153 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.190169 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.190185 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.190212 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:23Z","lastTransitionTime":"2026-01-10T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.292133 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.292242 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.292262 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.292286 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.292304 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:23Z","lastTransitionTime":"2026-01-10T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.395095 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.395140 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.395149 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.395166 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.395179 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:23Z","lastTransitionTime":"2026-01-10T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.497406 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.497478 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.497496 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.497521 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.497541 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:23Z","lastTransitionTime":"2026-01-10T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.600444 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.600516 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.600537 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.600567 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.600591 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:23Z","lastTransitionTime":"2026-01-10T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.692114 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:23 crc kubenswrapper[4810]: E0110 06:47:23.692302 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.703144 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.703183 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.703213 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.703230 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.703242 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:23Z","lastTransitionTime":"2026-01-10T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.804897 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.804934 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.804947 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.804962 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.804974 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:23Z","lastTransitionTime":"2026-01-10T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.908167 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.908256 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.908296 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.908317 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:23 crc kubenswrapper[4810]: I0110 06:47:23.908333 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:23Z","lastTransitionTime":"2026-01-10T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.011182 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.011281 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.011304 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.011335 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.011361 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:24Z","lastTransitionTime":"2026-01-10T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.113425 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.113500 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.113515 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.113533 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.113545 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:24Z","lastTransitionTime":"2026-01-10T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.216256 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.216303 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.216314 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.216330 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.216341 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:24Z","lastTransitionTime":"2026-01-10T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.319528 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.319596 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.319614 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.319640 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.319657 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:24Z","lastTransitionTime":"2026-01-10T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.422741 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.422796 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.422813 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.422839 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.422858 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:24Z","lastTransitionTime":"2026-01-10T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.526230 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.526320 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.526344 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.526378 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.526398 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:24Z","lastTransitionTime":"2026-01-10T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.629267 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.629329 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.629340 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.629381 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.629397 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:24Z","lastTransitionTime":"2026-01-10T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.692355 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.692367 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.692371 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:24 crc kubenswrapper[4810]: E0110 06:47:24.692511 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:24 crc kubenswrapper[4810]: E0110 06:47:24.692614 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:24 crc kubenswrapper[4810]: E0110 06:47:24.692708 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.735623 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.735689 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.735710 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.735737 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.735763 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:24Z","lastTransitionTime":"2026-01-10T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.838433 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.838498 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.838514 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.838909 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.838962 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:24Z","lastTransitionTime":"2026-01-10T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.942078 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.942108 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.942119 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.942133 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:24 crc kubenswrapper[4810]: I0110 06:47:24.942142 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:24Z","lastTransitionTime":"2026-01-10T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.044923 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.044960 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.045018 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.045041 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.045060 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:25Z","lastTransitionTime":"2026-01-10T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.147567 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.147602 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.147615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.147634 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.147646 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:25Z","lastTransitionTime":"2026-01-10T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.250496 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.250565 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.250588 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.250617 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.250640 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:25Z","lastTransitionTime":"2026-01-10T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.355182 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.355287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.355309 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.355344 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.355366 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:25Z","lastTransitionTime":"2026-01-10T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.458271 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.458330 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.458341 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.458356 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.458365 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:25Z","lastTransitionTime":"2026-01-10T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.561226 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.561326 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.561346 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.561403 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.561420 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:25Z","lastTransitionTime":"2026-01-10T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.664782 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.664881 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.664899 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.664954 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.664971 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:25Z","lastTransitionTime":"2026-01-10T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.692473 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:25 crc kubenswrapper[4810]: E0110 06:47:25.692653 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.768846 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.768945 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.768966 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.768991 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.769047 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:25Z","lastTransitionTime":"2026-01-10T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.872701 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.872809 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.872829 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.872891 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.872910 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:25Z","lastTransitionTime":"2026-01-10T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.976093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.976262 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.976287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.976311 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:25 crc kubenswrapper[4810]: I0110 06:47:25.976329 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:25Z","lastTransitionTime":"2026-01-10T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.078954 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.078984 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.078992 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.079024 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.079035 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:26Z","lastTransitionTime":"2026-01-10T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.181587 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.181630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.181642 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.181660 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.181673 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:26Z","lastTransitionTime":"2026-01-10T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.285576 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.285630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.285647 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.285670 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.285687 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:26Z","lastTransitionTime":"2026-01-10T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.388365 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.388408 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.388433 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.388449 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.388461 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:26Z","lastTransitionTime":"2026-01-10T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.491231 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.491308 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.491341 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.491369 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.491390 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:26Z","lastTransitionTime":"2026-01-10T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.597927 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.597992 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.598003 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.598022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.598058 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:26Z","lastTransitionTime":"2026-01-10T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.692044 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.692126 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.692227 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:47:26 crc kubenswrapper[4810]: E0110 06:47:26.692299 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:47:26 crc kubenswrapper[4810]: E0110 06:47:26.692148 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 10 06:47:26 crc kubenswrapper[4810]: E0110 06:47:26.692401 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.700944 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.701006 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.701016 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.701034 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.701047 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:26Z","lastTransitionTime":"2026-01-10T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.804417 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.804479 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.804495 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.804520 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.804538 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:26Z","lastTransitionTime":"2026-01-10T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.907502 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.907588 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.907604 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.907625 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:26 crc kubenswrapper[4810]: I0110 06:47:26.907639 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:26Z","lastTransitionTime":"2026-01-10T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.010666 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.010772 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.010793 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.010821 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.010838 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:27Z","lastTransitionTime":"2026-01-10T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.115589 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.115628 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.115645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.115672 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.115686 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:27Z","lastTransitionTime":"2026-01-10T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.218249 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.218332 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.218353 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.218384 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.218406 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:27Z","lastTransitionTime":"2026-01-10T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.322042 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.322076 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.322086 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.322099 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.322107 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:27Z","lastTransitionTime":"2026-01-10T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.424487 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.424519 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.424531 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.424545 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.424558 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:27Z","lastTransitionTime":"2026-01-10T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.527337 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.527602 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.527629 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.527656 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.527677 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:27Z","lastTransitionTime":"2026-01-10T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.630363 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.630430 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.630457 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.630488 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.630514 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:27Z","lastTransitionTime":"2026-01-10T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.693039 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84"
Jan 10 06:47:27 crc kubenswrapper[4810]: E0110 06:47:27.693285 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.733785 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.734100 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.734311 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.734518 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.734722 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:27Z","lastTransitionTime":"2026-01-10T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.838339 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.838414 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.838435 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.838463 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.838484 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:27Z","lastTransitionTime":"2026-01-10T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.941933 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.941981 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.941993 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.942010 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:27 crc kubenswrapper[4810]: I0110 06:47:27.942021 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:27Z","lastTransitionTime":"2026-01-10T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.044633 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.044697 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.044720 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.044748 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.044769 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:28Z","lastTransitionTime":"2026-01-10T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.147549 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.147619 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.147637 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.147661 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.147678 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:28Z","lastTransitionTime":"2026-01-10T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.250479 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.250527 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.250538 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.250558 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.250570 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:28Z","lastTransitionTime":"2026-01-10T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.353779 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.353826 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.353843 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.353866 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.353882 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:28Z","lastTransitionTime":"2026-01-10T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.456578 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.456629 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.456645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.456667 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.456685 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:28Z","lastTransitionTime":"2026-01-10T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.559232 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.559308 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.559343 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.559382 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.559407 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:28Z","lastTransitionTime":"2026-01-10T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.661853 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.661926 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.661950 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.661980 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.662002 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:28Z","lastTransitionTime":"2026-01-10T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.692424 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.692424 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:47:28 crc kubenswrapper[4810]: E0110 06:47:28.692652 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.692481 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:47:28 crc kubenswrapper[4810]: E0110 06:47:28.692787 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:47:28 crc kubenswrapper[4810]: E0110 06:47:28.692937 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.764615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.764679 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.764700 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.764724 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.764742 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:28Z","lastTransitionTime":"2026-01-10T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.867663 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.867725 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.867742 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.867765 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.867782 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:28Z","lastTransitionTime":"2026-01-10T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.970792 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.970850 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.970868 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.970894 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:28 crc kubenswrapper[4810]: I0110 06:47:28.970911 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:28Z","lastTransitionTime":"2026-01-10T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.073901 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.073971 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.073995 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.074026 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.074049 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:29Z","lastTransitionTime":"2026-01-10T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.177026 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.177088 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.177111 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.177142 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.177169 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:29Z","lastTransitionTime":"2026-01-10T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.279570 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.279630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.279648 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.279673 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.279692 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:29Z","lastTransitionTime":"2026-01-10T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.383304 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.383356 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.383373 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.383400 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.383418 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:29Z","lastTransitionTime":"2026-01-10T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.486797 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.486865 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.486886 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.486913 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.486935 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:29Z","lastTransitionTime":"2026-01-10T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.589844 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.589892 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.589910 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.589934 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.589950 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:29Z","lastTransitionTime":"2026-01-10T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.692305 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:29 crc kubenswrapper[4810]: E0110 06:47:29.692589 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.692696 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.692738 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.692757 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.692780 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.692801 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:29Z","lastTransitionTime":"2026-01-10T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.795742 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.795809 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.795828 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.795852 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.795870 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:29Z","lastTransitionTime":"2026-01-10T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.898924 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.898985 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.899003 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.899033 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:29 crc kubenswrapper[4810]: I0110 06:47:29.899052 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:29Z","lastTransitionTime":"2026-01-10T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.001882 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.001945 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.001965 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.001994 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.002012 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:30Z","lastTransitionTime":"2026-01-10T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.105537 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.105605 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.105628 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.105656 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.105680 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:30Z","lastTransitionTime":"2026-01-10T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.209401 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.209481 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.209500 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.209524 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.209540 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:30Z","lastTransitionTime":"2026-01-10T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.312097 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.312275 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.312300 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.312327 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.312347 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:30Z","lastTransitionTime":"2026-01-10T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.415674 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.415743 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.415762 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.415823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.415842 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:30Z","lastTransitionTime":"2026-01-10T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.517935 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.517995 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.518011 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.518040 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.518063 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:30Z","lastTransitionTime":"2026-01-10T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.620668 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.620737 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.620755 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.620777 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.620795 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:30Z","lastTransitionTime":"2026-01-10T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.692578 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.692699 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:30 crc kubenswrapper[4810]: E0110 06:47:30.692775 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:30 crc kubenswrapper[4810]: E0110 06:47:30.692947 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.692577 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:30 crc kubenswrapper[4810]: E0110 06:47:30.693089 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.722950 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.723010 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.723031 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.723058 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.723077 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:30Z","lastTransitionTime":"2026-01-10T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.826505 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.826582 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.826605 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.826634 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.826657 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:30Z","lastTransitionTime":"2026-01-10T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.929696 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.929978 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.930171 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.930356 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:30 crc kubenswrapper[4810]: I0110 06:47:30.930471 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:30Z","lastTransitionTime":"2026-01-10T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.033580 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.033698 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.033722 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.033753 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.033777 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.136259 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.136324 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.136345 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.136368 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.136385 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.238670 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.238735 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.238751 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.238776 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.238793 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.340889 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.340955 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.340976 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.341003 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.341022 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.444054 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.444094 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.444105 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.444125 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.444137 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.547590 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.547658 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.547678 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.547703 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.547725 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.650687 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.651100 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.651299 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.651554 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.651770 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.692986 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:31 crc kubenswrapper[4810]: E0110 06:47:31.693244 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.694502 4810 scope.go:117] "RemoveContainer" containerID="7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.717625 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.717663 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.717679 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.717698 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.717711 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.718576 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: E0110 06:47:31.738488 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.744161 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.744230 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.744245 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.744264 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.744276 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.756493 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:07Z\\\",\\\"message\\\":\\\"-85b44fc459-gdk6g in node crc\\\\nI0110 06:47:06.665499 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0110 06:47:06.665490 6435 model_client.go:382] Update 
operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:06.665521 6435 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0110 06:47:06.665552 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756
803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: E0110 06:47:31.762656 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.767565 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.767619 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.767639 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.767665 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.767683 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.779430 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d427
94017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: E0110 06:47:31.819485 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.820811 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.823788 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.823844 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.823862 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.823916 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.823935 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.837877 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: E0110 06:47:31.838868 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.848022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.848069 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.848085 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.848104 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.848117 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.856311 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: E0110 06:47:31.866134 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: E0110 06:47:31.866478 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.868097 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.868271 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.868413 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.868513 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.868599 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.874758 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:18Z\\\",\\\"message\\\":\\\"2026-01-10T06:46:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43\\\\n2026-01-10T06:46:33+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43 to /host/opt/cni/bin/\\\\n2026-01-10T06:46:33Z [verbose] multus-daemon started\\\\n2026-01-10T06:46:33Z [verbose] Readiness Indicator file check\\\\n2026-01-10T06:47:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.898720 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.919372 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.939338 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.952314 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.967581 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.971888 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.971947 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.971965 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.971987 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.972004 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:31Z","lastTransitionTime":"2026-01-10T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:31 crc kubenswrapper[4810]: I0110 06:47:31.983890 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:32 crc 
kubenswrapper[4810]: I0110 06:47:32.002086 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.020403 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:32Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.034964 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:32Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.046492 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:32Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.059434 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:32Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.074281 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.074486 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.074638 4810 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.074785 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.074916 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:32Z","lastTransitionTime":"2026-01-10T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.177755 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.177816 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.177835 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.177861 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.177878 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:32Z","lastTransitionTime":"2026-01-10T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.281773 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.281841 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.281864 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.281895 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.281918 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:32Z","lastTransitionTime":"2026-01-10T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.385115 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.385280 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.385310 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.385340 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.385361 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:32Z","lastTransitionTime":"2026-01-10T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.489057 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.489137 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.489161 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.489227 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.489252 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:32Z","lastTransitionTime":"2026-01-10T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.591865 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.591938 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.591955 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.591980 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.592000 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:32Z","lastTransitionTime":"2026-01-10T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.692457 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.692551 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.693182 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:47:32 crc kubenswrapper[4810]: E0110 06:47:32.693330 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:47:32 crc kubenswrapper[4810]: E0110 06:47:32.693433 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 10 06:47:32 crc kubenswrapper[4810]: E0110 06:47:32.693496 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.695929 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.695962 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.695972 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.695986 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.695998 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:32Z","lastTransitionTime":"2026-01-10T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.702545 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.803774 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.804133 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.804151 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.804257 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.804310 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:32Z","lastTransitionTime":"2026-01-10T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.907815 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.907869 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.907886 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.907909 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:32 crc kubenswrapper[4810]: I0110 06:47:32.907926 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:32Z","lastTransitionTime":"2026-01-10T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.011051 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.011101 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.011157 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.011185 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.011243 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:33Z","lastTransitionTime":"2026-01-10T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.116169 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.116225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.116236 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.116252 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.116266 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:33Z","lastTransitionTime":"2026-01-10T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.219330 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.219393 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.219409 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.219434 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.219448 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:33Z","lastTransitionTime":"2026-01-10T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.296190 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/2.log"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.306122 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerStarted","Data":"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8"}
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.306914 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.322990 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.323041 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.323055 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.323075 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.323090 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:33Z","lastTransitionTime":"2026-01-10T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.328923 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.345487 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.358876 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.372510 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:18Z\\\",\\\"message\\\":\\\"2026-01-10T06:46:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43\\\\n2026-01-10T06:46:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43 to /host/opt/cni/bin/\\\\n2026-01-10T06:46:33Z [verbose] multus-daemon started\\\\n2026-01-10T06:46:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-10T06:47:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.393076 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f
6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.426546 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.426616 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.426632 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.426663 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.426681 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:33Z","lastTransitionTime":"2026-01-10T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.458348 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:07Z\\\",\\\"message\\\":\\\"-85b44fc459-gdk6g in node crc\\\\nI0110 06:47:06.665499 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0110 06:47:06.665490 6435 model_client.go:382] Update 
operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:06.665521 6435 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0110 06:47:06.665552 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.480258 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.501316 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.512547 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.523371 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.529796 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.529837 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.529850 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.529868 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.529897 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:33Z","lastTransitionTime":"2026-01-10T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.533554 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.543112 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4683c08-cf2f-4254-82cc-6843a69cef7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c71b7e621e2b39c51ab2a8542fd9754b2747b0fb64701ed2d45ba4cc84479b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.554911 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.567824 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.579258 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.593020 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.604012 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.615952 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc 
kubenswrapper[4810]: I0110 06:47:33.628845 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.632795 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.632873 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.632896 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.632921 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.632939 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:33Z","lastTransitionTime":"2026-01-10T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.692754 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.692888 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.736635 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.736696 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.736715 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.736741 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.736759 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:33Z","lastTransitionTime":"2026-01-10T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.772740 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.772929 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.772901586 +0000 UTC m=+146.388394509 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.773033 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.773296 4810 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.773401 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.773374867 +0000 UTC m=+146.388867780 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.839593 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.839638 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.839651 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.839667 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.839681 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:33Z","lastTransitionTime":"2026-01-10T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.874118 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.874251 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.874354 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.874383 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.874395 4810 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.874433 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 10 
06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.874462 4810 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 10 06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.874478 4810 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.874432 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.874444 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.874428851 +0000 UTC m=+146.489921734 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.874582 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.874563004 +0000 UTC m=+146.490055977 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 10 06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.874704 4810 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:47:33 crc kubenswrapper[4810]: E0110 06:47:33.874817 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.87479159 +0000 UTC m=+146.490284523 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.942282 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.942343 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.942365 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.942390 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:33 crc kubenswrapper[4810]: I0110 06:47:33.942407 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:33Z","lastTransitionTime":"2026-01-10T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.046375 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.046447 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.046470 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.046497 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.046519 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:34Z","lastTransitionTime":"2026-01-10T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.149312 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.149369 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.149385 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.149410 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.149427 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:34Z","lastTransitionTime":"2026-01-10T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.252274 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.252345 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.252364 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.252391 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.252409 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:34Z","lastTransitionTime":"2026-01-10T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.313033 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/3.log" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.314236 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/2.log" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.319430 4810 generic.go:334] "Generic (PLEG): container finished" podID="dce51084-e094-437c-a988-66b17982fd5d" containerID="316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8" exitCode=1 Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.319497 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8"} Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.319555 4810 scope.go:117] "RemoveContainer" containerID="7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.320810 4810 scope.go:117] "RemoveContainer" containerID="316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8" Jan 10 06:47:34 crc kubenswrapper[4810]: E0110 06:47:34.321274 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.343390 4810 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.356630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.356734 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.356755 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.356819 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.356837 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:34Z","lastTransitionTime":"2026-01-10T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.360570 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.378160 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.394336 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc 
kubenswrapper[4810]: I0110 06:47:34.411252 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.427014 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.447350 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.459154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.459187 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.459211 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.459225 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.459234 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:34Z","lastTransitionTime":"2026-01-10T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.461717 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.479039 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:18Z\\\",\\\"message\\\":\\\"2026-01-10T06:46:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43\\\\n2026-01-10T06:46:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43 to /host/opt/cni/bin/\\\\n2026-01-10T06:46:33Z [verbose] multus-daemon started\\\\n2026-01-10T06:46:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-10T06:47:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.498661 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f
6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.529839 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ebfffa14506dd95ce69c03ee7480200f0843c810fdfec6cf3bf9973443652f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:07Z\\\",\\\"message\\\":\\\"-85b44fc459-gdk6g in node crc\\\\nI0110 06:47:06.665499 6435 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0110 06:47:06.665490 6435 model_client.go:382] Update 
operations generated as: [{Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:06.665521 6435 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0110 06:47:06.665552 6435 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller init\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:34Z\\\",\\\"message\\\":\\\"ntroller.go:776] Recording success event on pod openshift-multus/multus-t7gh2\\\\nI0110 06:47:33.879634 6822 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lmtrv\\\\nI0110 06:47:33.879373 6822 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879634 6822 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group 
Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:33.879649 6822 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879658 6822 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0110 06:47:33.879663 6822 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0110 06:47:33.879667 6822 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879538 6822 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPa
th\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 
06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.547849 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.561671 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.561719 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.561736 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.561759 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.561779 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:34Z","lastTransitionTime":"2026-01-10T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.565107 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.585510 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:1
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.603860 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.619369 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.635044 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4683c08-cf2f-4254-82cc-6843a69cef7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c71b7e621e2b39c51ab2a8542fd9754b2747b0fb64701ed2d45ba4cc84479b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc 
kubenswrapper[4810]: I0110 06:47:34.665702 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.665761 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.665777 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.665801 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.665817 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:34Z","lastTransitionTime":"2026-01-10T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.666618 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.681915 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:34Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.692447 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.692566 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:34 crc kubenswrapper[4810]: E0110 06:47:34.692740 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.692767 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:34 crc kubenswrapper[4810]: E0110 06:47:34.692883 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:34 crc kubenswrapper[4810]: E0110 06:47:34.692999 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.769944 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.770006 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.770030 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.770058 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.770078 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:34Z","lastTransitionTime":"2026-01-10T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.873822 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.873901 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.873925 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.873960 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.873985 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:34Z","lastTransitionTime":"2026-01-10T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.976407 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.976622 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.976645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.976667 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:34 crc kubenswrapper[4810]: I0110 06:47:34.976683 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:34Z","lastTransitionTime":"2026-01-10T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.080624 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.080672 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.080690 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.080714 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.080731 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:35Z","lastTransitionTime":"2026-01-10T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.183742 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.183794 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.183810 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.183831 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.183847 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:35Z","lastTransitionTime":"2026-01-10T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.287442 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.287500 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.287517 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.287539 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.287555 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:35Z","lastTransitionTime":"2026-01-10T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.326422 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/3.log" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.331488 4810 scope.go:117] "RemoveContainer" containerID="316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8" Jan 10 06:47:35 crc kubenswrapper[4810]: E0110 06:47:35.331727 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.362245 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:34Z\\\",\\\"message\\\":\\\"ntroller.go:776] Recording success event on pod openshift-multus/multus-t7gh2\\\\nI0110 
06:47:33.879634 6822 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lmtrv\\\\nI0110 06:47:33.879373 6822 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879634 6822 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:33.879649 6822 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879658 6822 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0110 06:47:33.879663 6822 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0110 06:47:33.879667 6822 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879538 6822 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756
803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.378855 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.392036 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.392098 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.392118 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.392244 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.392266 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:35Z","lastTransitionTime":"2026-01-10T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.400573 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.419257 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.439944 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.462299 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:18Z\\\",\\\"message\\\":\\\"2026-01-10T06:46:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43\\\\n2026-01-10T06:46:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43 to /host/opt/cni/bin/\\\\n2026-01-10T06:46:33Z [verbose] multus-daemon started\\\\n2026-01-10T06:46:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-10T06:47:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.478847 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f
6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.494053 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.494105 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.494117 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.494134 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:35 crc kubenswrapper[4810]: 
I0110 06:47:35.494147 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:35Z","lastTransitionTime":"2026-01-10T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.495625 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4683c08-cf2f-4254-82cc-6843a69cef7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c71b7e621e2b39c51ab2a8542fd9754b2747b0fb64701ed2d45ba4cc84479b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.527753 4810 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535
ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d
224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.542059 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee122
0d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.557598 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.571027 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.585138 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.596929 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.596964 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.596976 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.596992 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.597004 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:35Z","lastTransitionTime":"2026-01-10T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.598819 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.611590 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.624567 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.638920 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.652406 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.667980 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:35Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:35 crc 
kubenswrapper[4810]: I0110 06:47:35.692584 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:35 crc kubenswrapper[4810]: E0110 06:47:35.692702 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.698385 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.698425 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.698439 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.698455 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.698468 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:35Z","lastTransitionTime":"2026-01-10T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.800863 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.800956 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.800985 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.801019 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.801045 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:35Z","lastTransitionTime":"2026-01-10T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.904524 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.904593 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.904616 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.904649 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:35 crc kubenswrapper[4810]: I0110 06:47:35.904676 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:35Z","lastTransitionTime":"2026-01-10T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.007952 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.008013 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.008038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.008067 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.008087 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:36Z","lastTransitionTime":"2026-01-10T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.110512 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.110571 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.110591 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.110616 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.110634 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:36Z","lastTransitionTime":"2026-01-10T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.213409 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.213476 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.213493 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.213521 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.213540 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:36Z","lastTransitionTime":"2026-01-10T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.316016 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.316061 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.316084 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.316105 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.316119 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:36Z","lastTransitionTime":"2026-01-10T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.419331 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.419416 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.419435 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.419458 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.419476 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:36Z","lastTransitionTime":"2026-01-10T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.522513 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.522597 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.522618 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.522644 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.522665 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:36Z","lastTransitionTime":"2026-01-10T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.626443 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.626521 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.626545 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.626578 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.626597 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:36Z","lastTransitionTime":"2026-01-10T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.692928 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.692993 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.692948 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:36 crc kubenswrapper[4810]: E0110 06:47:36.693246 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:36 crc kubenswrapper[4810]: E0110 06:47:36.693348 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:36 crc kubenswrapper[4810]: E0110 06:47:36.693467 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.729947 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.730039 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.730090 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.730116 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.730136 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:36Z","lastTransitionTime":"2026-01-10T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.832216 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.832262 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.832274 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.832291 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.832326 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:36Z","lastTransitionTime":"2026-01-10T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.934694 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.934766 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.934784 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.934810 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:36 crc kubenswrapper[4810]: I0110 06:47:36.934827 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:36Z","lastTransitionTime":"2026-01-10T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.038041 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.038133 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.038158 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.038237 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.038278 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:37Z","lastTransitionTime":"2026-01-10T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.142046 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.142111 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.142134 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.142165 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.142188 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:37Z","lastTransitionTime":"2026-01-10T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.245283 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.245364 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.245391 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.245423 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.245447 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:37Z","lastTransitionTime":"2026-01-10T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.348105 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.348158 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.348174 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.348230 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.348246 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:37Z","lastTransitionTime":"2026-01-10T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.451342 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.451420 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.451443 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.451474 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.451499 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:37Z","lastTransitionTime":"2026-01-10T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.554091 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.554125 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.554133 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.554146 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.554156 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:37Z","lastTransitionTime":"2026-01-10T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.656754 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.657752 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.657911 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.658121 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.658306 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:37Z","lastTransitionTime":"2026-01-10T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.692606 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:37 crc kubenswrapper[4810]: E0110 06:47:37.692792 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.761904 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.761971 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.761994 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.762020 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.762038 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:37Z","lastTransitionTime":"2026-01-10T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.864403 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.864448 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.864483 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.864501 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.864514 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:37Z","lastTransitionTime":"2026-01-10T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.967582 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.967645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.967660 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.967687 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:37 crc kubenswrapper[4810]: I0110 06:47:37.967705 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:37Z","lastTransitionTime":"2026-01-10T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.070145 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.070235 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.070253 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.070276 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.070293 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:38Z","lastTransitionTime":"2026-01-10T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.173283 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.173370 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.173399 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.173433 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.173460 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:38Z","lastTransitionTime":"2026-01-10T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.275912 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.275974 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.275992 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.276020 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.276038 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:38Z","lastTransitionTime":"2026-01-10T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.379027 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.379092 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.379110 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.379134 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.379151 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:38Z","lastTransitionTime":"2026-01-10T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.482748 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.482805 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.482823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.482851 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.482869 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:38Z","lastTransitionTime":"2026-01-10T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.587399 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.587458 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.587481 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.587510 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.587532 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:38Z","lastTransitionTime":"2026-01-10T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.690916 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.690978 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.690995 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.691018 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.691034 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:38Z","lastTransitionTime":"2026-01-10T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.692263 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.692311 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:38 crc kubenswrapper[4810]: E0110 06:47:38.692443 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.692470 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:38 crc kubenswrapper[4810]: E0110 06:47:38.692815 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:38 crc kubenswrapper[4810]: E0110 06:47:38.692922 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.793545 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.793599 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.793615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.793637 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.793658 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:38Z","lastTransitionTime":"2026-01-10T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.896717 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.896817 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.896835 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.896899 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.896920 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:38Z","lastTransitionTime":"2026-01-10T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:38 crc kubenswrapper[4810]: I0110 06:47:38.999694 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:38.999804 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:38.999876 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:38.999951 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:38.999974 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:38Z","lastTransitionTime":"2026-01-10T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.102984 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.103102 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.103120 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.103145 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.103161 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:39Z","lastTransitionTime":"2026-01-10T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.206287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.206342 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.206359 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.206381 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.206398 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:39Z","lastTransitionTime":"2026-01-10T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.309177 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.309258 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.309281 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.309304 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.309322 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:39Z","lastTransitionTime":"2026-01-10T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.412612 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.412672 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.412690 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.412718 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.412736 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:39Z","lastTransitionTime":"2026-01-10T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.515700 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.515758 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.515775 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.515799 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.515823 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:39Z","lastTransitionTime":"2026-01-10T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.618310 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.618374 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.618390 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.618415 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.618433 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:39Z","lastTransitionTime":"2026-01-10T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.693013 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:39 crc kubenswrapper[4810]: E0110 06:47:39.693277 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.721041 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.721098 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.721114 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.721136 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.721154 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:39Z","lastTransitionTime":"2026-01-10T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.832085 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.832131 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.832153 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.832180 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.832240 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:39Z","lastTransitionTime":"2026-01-10T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.935943 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.936014 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.936035 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.936063 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:39 crc kubenswrapper[4810]: I0110 06:47:39.936084 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:39Z","lastTransitionTime":"2026-01-10T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.039504 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.039558 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.039576 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.039599 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.039615 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:40Z","lastTransitionTime":"2026-01-10T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.143732 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.143806 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.143825 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.143855 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.143875 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:40Z","lastTransitionTime":"2026-01-10T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.246658 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.246714 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.246732 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.246756 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.246774 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:40Z","lastTransitionTime":"2026-01-10T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.348677 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.348736 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.348756 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.348777 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.348789 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:40Z","lastTransitionTime":"2026-01-10T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.451417 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.451515 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.451535 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.451936 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.452257 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:40Z","lastTransitionTime":"2026-01-10T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.555506 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.555572 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.555588 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.555619 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.555634 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:40Z","lastTransitionTime":"2026-01-10T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.658074 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.658119 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.658138 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.658160 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.658176 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:40Z","lastTransitionTime":"2026-01-10T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.692710 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.692815 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:40 crc kubenswrapper[4810]: E0110 06:47:40.693004 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.693055 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:40 crc kubenswrapper[4810]: E0110 06:47:40.693213 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:40 crc kubenswrapper[4810]: E0110 06:47:40.693332 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.761691 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.761824 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.761857 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.761886 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.761907 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:40Z","lastTransitionTime":"2026-01-10T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.866003 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.866037 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.866046 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.866060 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.866069 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:40Z","lastTransitionTime":"2026-01-10T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.969821 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.969895 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.969912 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.969939 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:40 crc kubenswrapper[4810]: I0110 06:47:40.969958 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:40Z","lastTransitionTime":"2026-01-10T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.072802 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.072867 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.072887 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.072912 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.072931 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:41Z","lastTransitionTime":"2026-01-10T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.175587 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.175645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.175661 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.175686 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.175705 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:41Z","lastTransitionTime":"2026-01-10T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.278970 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.279022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.279039 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.279061 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.279076 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:41Z","lastTransitionTime":"2026-01-10T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.393700 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.393771 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.393793 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.393823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.393845 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:41Z","lastTransitionTime":"2026-01-10T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.497250 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.497318 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.497339 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.497365 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.497383 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:41Z","lastTransitionTime":"2026-01-10T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.600983 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.601062 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.601082 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.601111 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.601130 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:41Z","lastTransitionTime":"2026-01-10T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.692101 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:41 crc kubenswrapper[4810]: E0110 06:47:41.692346 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.703629 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.703680 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.703699 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.703725 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.703744 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:41Z","lastTransitionTime":"2026-01-10T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.713424 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc 
kubenswrapper[4810]: I0110 06:47:41.735503 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.757116 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.777562 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.795387 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.807412 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.807451 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.807485 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.807506 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.807521 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:41Z","lastTransitionTime":"2026-01-10T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.810982 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8dfd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.827011 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.850702 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:34Z\\\",\\\"message\\\":\\\"ntroller.go:776] Recording success event on pod openshift-multus/multus-t7gh2\\\\nI0110 06:47:33.879634 6822 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lmtrv\\\\nI0110 06:47:33.879373 6822 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879634 
6822 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:33.879649 6822 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879658 6822 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0110 06:47:33.879663 6822 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0110 06:47:33.879667 6822 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879538 6822 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756
803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.869696 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.884684 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.903545 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.910187 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.910271 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.910296 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.910327 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.910349 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:41Z","lastTransitionTime":"2026-01-10T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.920374 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.939745 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:18Z\\\",\\\"message\\\":\\\"2026-01-10T06:46:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43\\\\n2026-01-10T06:46:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43 to /host/opt/cni/bin/\\\\n2026-01-10T06:46:33Z [verbose] multus-daemon started\\\\n2026-01-10T06:46:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-10T06:47:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.955289 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4683c08-cf2f-4254-82cc-6843a69cef7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c71b7e621e2b39c51ab2a8542fd9754b2747b0fb64701ed2d45ba4cc84479b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:41 crc kubenswrapper[4810]: I0110 06:47:41.987036 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.006548 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:42Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.013860 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.013960 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.014020 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.014106 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.014129 4810 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.026477 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:42Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.043475 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:42Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.060585 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:42Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.117150 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.117287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.117314 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.117350 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.117374 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.132274 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.132344 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.132366 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.132398 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.132420 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:42 crc kubenswrapper[4810]: E0110 06:47:42.155388 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:42Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.160533 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.160576 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.160599 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.160627 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.160648 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:42 crc kubenswrapper[4810]: E0110 06:47:42.182956 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:42Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.188899 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.188952 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.188978 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.189009 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.189031 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:42 crc kubenswrapper[4810]: E0110 06:47:42.209081 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:42Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.213471 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.213566 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.213596 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.213635 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.213666 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:42 crc kubenswrapper[4810]: E0110 06:47:42.233909 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:42Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.238960 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.239035 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.239060 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.239091 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.239117 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:42 crc kubenswrapper[4810]: E0110 06:47:42.260677 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:42Z is after 2025-08-24T17:21:41Z"
Jan 10 06:47:42 crc kubenswrapper[4810]: E0110 06:47:42.260901 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.263268 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.263365 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.263395 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.263455 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.263485 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.366513 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.366615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.366636 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.366674 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.366704 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.470623 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.470696 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.470720 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.470937 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.470964 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.574107 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.574168 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.574185 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.574251 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.574269 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.677451 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.677813 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.677986 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.678227 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.678455 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.692027 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.692082 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:47:42 crc kubenswrapper[4810]: E0110 06:47:42.692269 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.692320 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:47:42 crc kubenswrapper[4810]: E0110 06:47:42.692528 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:47:42 crc kubenswrapper[4810]: E0110 06:47:42.692704 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.782999 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.783061 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.783078 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.783105 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.783127 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.886608 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.886666 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.886685 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.886711 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.886731 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.990394 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.990455 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.990473 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.990499 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:42 crc kubenswrapper[4810]: I0110 06:47:42.990518 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:42Z","lastTransitionTime":"2026-01-10T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.093505 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.093839 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.094019 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.094158 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.094475 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:43Z","lastTransitionTime":"2026-01-10T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.198110 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.198169 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.198186 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.198244 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.198270 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:43Z","lastTransitionTime":"2026-01-10T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.300826 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.300879 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.300899 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.300921 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.300939 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:43Z","lastTransitionTime":"2026-01-10T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.403572 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.403636 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.403653 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.403677 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.403693 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:43Z","lastTransitionTime":"2026-01-10T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.507178 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.507253 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.507266 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.507287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.507302 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:43Z","lastTransitionTime":"2026-01-10T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.610947 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.611014 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.611028 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.611050 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.611063 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:43Z","lastTransitionTime":"2026-01-10T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.692347 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84"
Jan 10 06:47:43 crc kubenswrapper[4810]: E0110 06:47:43.692540 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.714246 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.714319 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.714341 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.714369 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.714392 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:43Z","lastTransitionTime":"2026-01-10T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.818147 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.818278 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.818302 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.818341 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.818359 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:43Z","lastTransitionTime":"2026-01-10T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.921617 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.921681 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.921697 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.921723 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:43 crc kubenswrapper[4810]: I0110 06:47:43.921741 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:43Z","lastTransitionTime":"2026-01-10T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.024100 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.024166 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.024183 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.024247 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.024267 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:44Z","lastTransitionTime":"2026-01-10T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.127463 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.127535 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.127560 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.127590 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.127611 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:44Z","lastTransitionTime":"2026-01-10T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.230729 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.230799 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.230821 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.230847 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.230865 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:44Z","lastTransitionTime":"2026-01-10T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.334241 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.334321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.334346 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.334378 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.334396 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:44Z","lastTransitionTime":"2026-01-10T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.437359 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.437410 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.437426 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.437451 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.437469 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:44Z","lastTransitionTime":"2026-01-10T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.541392 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.541463 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.541480 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.541503 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.541521 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:44Z","lastTransitionTime":"2026-01-10T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.645620 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.645718 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.645779 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.645806 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.645861 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:44Z","lastTransitionTime":"2026-01-10T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.691991 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.692059 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:44 crc kubenswrapper[4810]: E0110 06:47:44.692150 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.691989 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:44 crc kubenswrapper[4810]: E0110 06:47:44.692395 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:44 crc kubenswrapper[4810]: E0110 06:47:44.692482 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.748827 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.748900 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.748918 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.748944 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.748962 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:44Z","lastTransitionTime":"2026-01-10T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.851529 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.851590 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.851613 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.851643 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.851665 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:44Z","lastTransitionTime":"2026-01-10T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.954761 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.954837 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.954860 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.954897 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:44 crc kubenswrapper[4810]: I0110 06:47:44.954917 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:44Z","lastTransitionTime":"2026-01-10T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.059107 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.059252 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.059276 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.059314 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.059336 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:45Z","lastTransitionTime":"2026-01-10T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.162751 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.162802 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.162823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.162850 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.162871 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:45Z","lastTransitionTime":"2026-01-10T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.266075 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.266176 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.266220 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.266242 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.266258 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:45Z","lastTransitionTime":"2026-01-10T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.368824 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.368899 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.368927 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.368958 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.368979 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:45Z","lastTransitionTime":"2026-01-10T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.471360 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.471417 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.471503 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.471535 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.471553 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:45Z","lastTransitionTime":"2026-01-10T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.574476 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.574537 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.574559 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.574588 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.574611 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:45Z","lastTransitionTime":"2026-01-10T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.677945 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.678022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.678051 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.678082 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.678104 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:45Z","lastTransitionTime":"2026-01-10T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.692483 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:45 crc kubenswrapper[4810]: E0110 06:47:45.692666 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.781574 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.781639 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.781659 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.781684 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.781701 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:45Z","lastTransitionTime":"2026-01-10T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.884916 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.884994 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.885021 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.885051 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.885070 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:45Z","lastTransitionTime":"2026-01-10T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.987556 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.987625 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.987648 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.987676 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:45 crc kubenswrapper[4810]: I0110 06:47:45.987698 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:45Z","lastTransitionTime":"2026-01-10T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.090529 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.090648 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.090667 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.090723 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.090744 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:46Z","lastTransitionTime":"2026-01-10T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.193732 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.193782 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.193798 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.193819 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.193832 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:46Z","lastTransitionTime":"2026-01-10T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.296149 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.296249 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.296274 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.296302 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.296319 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:46Z","lastTransitionTime":"2026-01-10T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.399442 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.399522 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.399547 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.399579 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.399602 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:46Z","lastTransitionTime":"2026-01-10T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.502172 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.502232 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.502246 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.502262 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.502394 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:46Z","lastTransitionTime":"2026-01-10T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.605243 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.605312 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.605327 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.605346 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.605357 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:46Z","lastTransitionTime":"2026-01-10T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.692713 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.692806 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.692738 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:46 crc kubenswrapper[4810]: E0110 06:47:46.693089 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:46 crc kubenswrapper[4810]: E0110 06:47:46.693871 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:46 crc kubenswrapper[4810]: E0110 06:47:46.694240 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.708301 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.708354 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.708370 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.708393 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.708409 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:46Z","lastTransitionTime":"2026-01-10T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.811002 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.811062 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.811080 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.811105 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.811123 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:46Z","lastTransitionTime":"2026-01-10T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.914373 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.914435 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.914453 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.914478 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:46 crc kubenswrapper[4810]: I0110 06:47:46.914495 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:46Z","lastTransitionTime":"2026-01-10T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.017456 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.017517 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.017534 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.017561 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.017584 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:47Z","lastTransitionTime":"2026-01-10T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.119921 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.119984 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.120002 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.120027 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.120043 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:47Z","lastTransitionTime":"2026-01-10T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.223504 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.223568 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.223590 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.223616 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.223633 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:47Z","lastTransitionTime":"2026-01-10T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.326373 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.326505 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.326525 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.326555 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.326577 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:47Z","lastTransitionTime":"2026-01-10T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.428579 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.428640 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.428657 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.428681 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.428722 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:47Z","lastTransitionTime":"2026-01-10T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.532688 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.532777 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.532804 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.532838 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.532861 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:47Z","lastTransitionTime":"2026-01-10T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.635757 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.635828 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.635851 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.635880 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.635902 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:47Z","lastTransitionTime":"2026-01-10T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.692847 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:47 crc kubenswrapper[4810]: E0110 06:47:47.693099 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.738096 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.738154 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.738176 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.738232 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.738255 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:47Z","lastTransitionTime":"2026-01-10T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.840774 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.840832 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.840850 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.840874 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.840892 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:47Z","lastTransitionTime":"2026-01-10T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.943936 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.944012 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.944030 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.944057 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:47 crc kubenswrapper[4810]: I0110 06:47:47.944080 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:47Z","lastTransitionTime":"2026-01-10T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.045929 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.046171 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.046185 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.046234 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.046251 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:48Z","lastTransitionTime":"2026-01-10T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.149399 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.149474 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.149497 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.149524 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.149541 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:48Z","lastTransitionTime":"2026-01-10T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.253701 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.253786 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.253810 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.253839 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.253869 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:48Z","lastTransitionTime":"2026-01-10T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.357059 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.357123 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.357142 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.357167 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.357185 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:48Z","lastTransitionTime":"2026-01-10T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.437059 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs\") pod \"network-metrics-daemon-9nv84\" (UID: \"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:48 crc kubenswrapper[4810]: E0110 06:47:48.437347 4810 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:47:48 crc kubenswrapper[4810]: E0110 06:47:48.437461 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs podName:6741fd18-31c0-4bc3-be74-c0f6080c67af nodeName:}" failed. No retries permitted until 2026-01-10 06:48:52.437432226 +0000 UTC m=+161.052925139 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs") pod "network-metrics-daemon-9nv84" (UID: "6741fd18-31c0-4bc3-be74-c0f6080c67af") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.461056 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.461120 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.461145 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.461188 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.461254 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:48Z","lastTransitionTime":"2026-01-10T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.564233 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.564272 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.564284 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.564299 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.564313 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:48Z","lastTransitionTime":"2026-01-10T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.666975 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.667022 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.667038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.667058 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.667074 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:48Z","lastTransitionTime":"2026-01-10T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.692399 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:48 crc kubenswrapper[4810]: E0110 06:47:48.692493 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.692620 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:48 crc kubenswrapper[4810]: E0110 06:47:48.692849 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.693560 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:48 crc kubenswrapper[4810]: E0110 06:47:48.693712 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.694086 4810 scope.go:117] "RemoveContainer" containerID="316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8" Jan 10 06:47:48 crc kubenswrapper[4810]: E0110 06:47:48.694526 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.770092 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.770142 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.770153 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.770168 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.770179 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:48Z","lastTransitionTime":"2026-01-10T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.872412 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.872474 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.872524 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.872550 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.872568 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:48Z","lastTransitionTime":"2026-01-10T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.975675 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.975732 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.975749 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.975772 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:48 crc kubenswrapper[4810]: I0110 06:47:48.975793 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:48Z","lastTransitionTime":"2026-01-10T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.078758 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.078831 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.078856 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.078888 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.078915 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:49Z","lastTransitionTime":"2026-01-10T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.182138 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.182189 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.182232 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.182257 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.182274 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:49Z","lastTransitionTime":"2026-01-10T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.285415 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.285543 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.285565 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.285589 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.285605 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:49Z","lastTransitionTime":"2026-01-10T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.387968 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.388056 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.388086 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.388118 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.388142 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:49Z","lastTransitionTime":"2026-01-10T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.491365 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.491447 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.491470 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.491502 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.491526 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:49Z","lastTransitionTime":"2026-01-10T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.594853 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.594902 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.594913 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.594929 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.594944 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:49Z","lastTransitionTime":"2026-01-10T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.692535 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:49 crc kubenswrapper[4810]: E0110 06:47:49.692777 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.698724 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.698781 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.698799 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.698825 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.698844 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:49Z","lastTransitionTime":"2026-01-10T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.800761 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.800818 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.800837 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.800862 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.800878 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:49Z","lastTransitionTime":"2026-01-10T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.903732 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.903813 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.903838 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.903868 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:49 crc kubenswrapper[4810]: I0110 06:47:49.903894 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:49Z","lastTransitionTime":"2026-01-10T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.007342 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.007432 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.007457 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.007488 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.007527 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:50Z","lastTransitionTime":"2026-01-10T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.110841 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.110887 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.110904 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.110926 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.110942 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:50Z","lastTransitionTime":"2026-01-10T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.214452 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.214521 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.214546 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.214575 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.214597 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:50Z","lastTransitionTime":"2026-01-10T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.317631 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.317687 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.317708 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.317732 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.317748 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:50Z","lastTransitionTime":"2026-01-10T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.420647 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.420710 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.420727 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.420750 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.420768 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:50Z","lastTransitionTime":"2026-01-10T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.522967 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.523012 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.523024 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.523042 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.523055 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:50Z","lastTransitionTime":"2026-01-10T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.625629 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.625689 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.625699 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.625718 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.625727 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:50Z","lastTransitionTime":"2026-01-10T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.692562 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.692621 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.692657 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:50 crc kubenswrapper[4810]: E0110 06:47:50.692832 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:50 crc kubenswrapper[4810]: E0110 06:47:50.692910 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:50 crc kubenswrapper[4810]: E0110 06:47:50.693016 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.728085 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.728149 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.728172 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.728238 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.728263 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:50Z","lastTransitionTime":"2026-01-10T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.831095 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.831174 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.831231 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.831258 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.831276 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:50Z","lastTransitionTime":"2026-01-10T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.934762 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.934823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.934872 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.934911 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:50 crc kubenswrapper[4810]: I0110 06:47:50.934936 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:50Z","lastTransitionTime":"2026-01-10T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.037443 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.037501 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.037513 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.037530 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.037543 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:51Z","lastTransitionTime":"2026-01-10T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.141328 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.141401 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.141444 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.141475 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.141502 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:51Z","lastTransitionTime":"2026-01-10T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.245842 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.245932 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.245959 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.245990 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.246022 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:51Z","lastTransitionTime":"2026-01-10T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.350184 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.350292 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.350319 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.350349 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.350368 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:51Z","lastTransitionTime":"2026-01-10T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.453309 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.453384 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.453473 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.453502 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.453526 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:51Z","lastTransitionTime":"2026-01-10T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.556038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.556110 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.556134 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.556163 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.556186 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:51Z","lastTransitionTime":"2026-01-10T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.658557 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.658615 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.658635 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.658659 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.658676 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:51Z","lastTransitionTime":"2026-01-10T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.692464 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:51 crc kubenswrapper[4810]: E0110 06:47:51.692676 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.710063 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.731348 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.753461 4810 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.761572 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.761631 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.761650 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.761673 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.761689 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:51Z","lastTransitionTime":"2026-01-10T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.770355 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.788362 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.809978 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8d
fd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.824953 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc 
kubenswrapper[4810]: I0110 06:47:51.841417 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.854402 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.864601 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.864636 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.864647 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:51 crc 
kubenswrapper[4810]: I0110 06:47:51.864666 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.864680 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:51Z","lastTransitionTime":"2026-01-10T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.867415 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.880173 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4d
ce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.900393 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:18Z\\\",\\\"message\\\":\\\"2026-01-10T06:46:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43\\\\n2026-01-10T06:46:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43 to /host/opt/cni/bin/\\\\n2026-01-10T06:46:33Z [verbose] multus-daemon started\\\\n2026-01-10T06:46:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-10T06:47:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.922093 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f
6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.953699 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:34Z\\\",\\\"message\\\":\\\"ntroller.go:776] Recording success event on pod openshift-multus/multus-t7gh2\\\\nI0110 06:47:33.879634 6822 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lmtrv\\\\nI0110 06:47:33.879373 6822 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879634 
6822 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:33.879649 6822 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879658 6822 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0110 06:47:33.879663 6822 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0110 06:47:33.879667 6822 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879538 6822 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756
803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.967658 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.967699 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.967716 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.967741 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.967760 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:51Z","lastTransitionTime":"2026-01-10T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:51 crc kubenswrapper[4810]: I0110 06:47:51.971567 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4683c08-cf2f-4254-82cc-6843a69cef7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c71b7e621e2b39c51ab2a8542fd9754b2747b0fb64701ed2d45ba4cc84479b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.006020 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.030479 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.051262 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.067318 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.070436 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.070494 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.070518 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.070551 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.070575 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.174437 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.174531 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.174554 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.174592 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.174612 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.278618 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.278686 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.278710 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.278749 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.278772 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.369743 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.369809 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.369827 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.369852 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.369870 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:52 crc kubenswrapper[4810]: E0110 06:47:52.389279 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.394407 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.394460 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.394478 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.394504 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.394527 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:52 crc kubenswrapper[4810]: E0110 06:47:52.413972 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.418637 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.418734 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.418755 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.418778 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.418793 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:52 crc kubenswrapper[4810]: E0110 06:47:52.436947 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.441130 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.441167 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.441228 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.441253 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.441267 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:52 crc kubenswrapper[4810]: E0110 06:47:52.457592 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.462344 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.462405 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.462425 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.462452 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.462470 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:52 crc kubenswrapper[4810]: E0110 06:47:52.481338 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:47:52Z is after 2025-08-24T17:21:41Z" Jan 10 06:47:52 crc kubenswrapper[4810]: E0110 06:47:52.481476 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.483104 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.483141 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.483152 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.483169 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.483182 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.586172 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.586265 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.586285 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.586313 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.586337 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.689734 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.689816 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.689844 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.689873 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.689891 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.692142 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.692181 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.692157 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:52 crc kubenswrapper[4810]: E0110 06:47:52.692328 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:52 crc kubenswrapper[4810]: E0110 06:47:52.692486 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:52 crc kubenswrapper[4810]: E0110 06:47:52.692608 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.792457 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.792512 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.792530 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.792552 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.792569 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.894661 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.894709 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.894724 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.894740 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.894752 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.997712 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.997777 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.997793 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.997812 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:52 crc kubenswrapper[4810]: I0110 06:47:52.997824 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:52Z","lastTransitionTime":"2026-01-10T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.100585 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.100631 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.100640 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.100656 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.100666 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:53Z","lastTransitionTime":"2026-01-10T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.203838 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.203901 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.203921 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.203948 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.203970 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:53Z","lastTransitionTime":"2026-01-10T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.306607 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.306642 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.306651 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.306668 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.306677 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:53Z","lastTransitionTime":"2026-01-10T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.408733 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.408775 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.408793 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.408812 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.408823 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:53Z","lastTransitionTime":"2026-01-10T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.511516 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.511578 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.511596 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.511619 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.511636 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:53Z","lastTransitionTime":"2026-01-10T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.613830 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.613866 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.613877 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.613894 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.613904 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:53Z","lastTransitionTime":"2026-01-10T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.693072 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:53 crc kubenswrapper[4810]: E0110 06:47:53.693317 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.717354 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.717407 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.717420 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.717439 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.717451 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:53Z","lastTransitionTime":"2026-01-10T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.819979 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.820334 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.820438 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.820549 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.820644 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:53Z","lastTransitionTime":"2026-01-10T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.923124 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.923170 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.923180 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.923221 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:53 crc kubenswrapper[4810]: I0110 06:47:53.923234 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:53Z","lastTransitionTime":"2026-01-10T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.026728 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.026802 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.026822 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.026852 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.026922 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:54Z","lastTransitionTime":"2026-01-10T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.130820 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.130908 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.130931 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.130971 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.130997 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:54Z","lastTransitionTime":"2026-01-10T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.235559 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.235614 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.235628 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.235649 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.235663 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:54Z","lastTransitionTime":"2026-01-10T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.338807 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.338939 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.338966 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.338994 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.339016 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:54Z","lastTransitionTime":"2026-01-10T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.441284 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.441333 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.441345 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.441360 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.441370 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:54Z","lastTransitionTime":"2026-01-10T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.544596 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.544649 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.544663 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.544680 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.544691 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:54Z","lastTransitionTime":"2026-01-10T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.647465 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.647503 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.647515 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.647532 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.647542 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:54Z","lastTransitionTime":"2026-01-10T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.692815 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.692930 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:54 crc kubenswrapper[4810]: E0110 06:47:54.692965 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.693032 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:47:54 crc kubenswrapper[4810]: E0110 06:47:54.693177 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:47:54 crc kubenswrapper[4810]: E0110 06:47:54.693665 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.750407 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.750459 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.750473 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.750494 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.750506 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:54Z","lastTransitionTime":"2026-01-10T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.854786 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.854866 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.854899 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.854933 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.854954 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:54Z","lastTransitionTime":"2026-01-10T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.957443 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.957507 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.957528 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.957558 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:54 crc kubenswrapper[4810]: I0110 06:47:54.957580 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:54Z","lastTransitionTime":"2026-01-10T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.059890 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.059963 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.059983 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.060008 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.060028 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:55Z","lastTransitionTime":"2026-01-10T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.163117 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.163171 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.163182 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.163221 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.163234 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:55Z","lastTransitionTime":"2026-01-10T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.266060 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.266276 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.266308 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.266350 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.266375 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:55Z","lastTransitionTime":"2026-01-10T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.370946 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.371019 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.371039 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.371066 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.371090 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:55Z","lastTransitionTime":"2026-01-10T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.474464 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.474533 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.474552 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.474580 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.474600 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:55Z","lastTransitionTime":"2026-01-10T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.577591 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.577735 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.577758 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.577785 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.577804 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:55Z","lastTransitionTime":"2026-01-10T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.681164 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.681270 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.681289 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.681314 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.681331 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:55Z","lastTransitionTime":"2026-01-10T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.693089 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84"
Jan 10 06:47:55 crc kubenswrapper[4810]: E0110 06:47:55.693324 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.784811 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.784879 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.784899 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.784927 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.784948 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:55Z","lastTransitionTime":"2026-01-10T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.888332 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.888388 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.888404 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.888427 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.888444 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:55Z","lastTransitionTime":"2026-01-10T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.991251 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.991317 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.991342 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.991372 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:55 crc kubenswrapper[4810]: I0110 06:47:55.991399 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:55Z","lastTransitionTime":"2026-01-10T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.095265 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.095330 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.095352 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.095382 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.095405 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:56Z","lastTransitionTime":"2026-01-10T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.198886 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.198936 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.198957 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.198983 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.199004 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:56Z","lastTransitionTime":"2026-01-10T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.302063 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.302123 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.302140 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.302163 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.302179 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:56Z","lastTransitionTime":"2026-01-10T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.405594 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.405665 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.405686 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.405713 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.405729 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:56Z","lastTransitionTime":"2026-01-10T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.508427 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.508496 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.508518 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.508547 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.508568 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:56Z","lastTransitionTime":"2026-01-10T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.611612 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.611676 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.611694 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.611717 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.611735 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:56Z","lastTransitionTime":"2026-01-10T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.691961 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.692001 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:47:56 crc kubenswrapper[4810]: E0110 06:47:56.692171 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.692335 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:47:56 crc kubenswrapper[4810]: E0110 06:47:56.692449 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:47:56 crc kubenswrapper[4810]: E0110 06:47:56.692859 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.717687 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.717769 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.717789 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.717820 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.717841 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:56Z","lastTransitionTime":"2026-01-10T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.820943 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.821010 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.821029 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.821055 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.821075 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:56Z","lastTransitionTime":"2026-01-10T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.923551 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.923629 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.923652 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.923682 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:56 crc kubenswrapper[4810]: I0110 06:47:56.923699 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:56Z","lastTransitionTime":"2026-01-10T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.027244 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.027311 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.027329 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.027353 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.027369 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:57Z","lastTransitionTime":"2026-01-10T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.130382 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.130429 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.130445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.130467 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.130484 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:57Z","lastTransitionTime":"2026-01-10T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.233024 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.233084 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.233101 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.233124 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.233141 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:57Z","lastTransitionTime":"2026-01-10T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.336274 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.336338 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.336366 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.336397 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.336418 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:57Z","lastTransitionTime":"2026-01-10T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.440001 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.440057 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.440072 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.440093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.440105 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:57Z","lastTransitionTime":"2026-01-10T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.542808 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.542863 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.542883 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.542906 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.542925 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:57Z","lastTransitionTime":"2026-01-10T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.645609 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.645641 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.645651 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.645666 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.645676 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:57Z","lastTransitionTime":"2026-01-10T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.692275 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:57 crc kubenswrapper[4810]: E0110 06:47:57.692517 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.748976 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.749031 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.749044 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.749065 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.749082 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:57Z","lastTransitionTime":"2026-01-10T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.852113 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.852234 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.852267 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.852303 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.852330 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:57Z","lastTransitionTime":"2026-01-10T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.956046 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.956087 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.956096 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.956111 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:57 crc kubenswrapper[4810]: I0110 06:47:57.956121 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:57Z","lastTransitionTime":"2026-01-10T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.059234 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.059291 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.059309 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.059332 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.059349 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:58Z","lastTransitionTime":"2026-01-10T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.161718 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.161764 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.161776 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.161797 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.161809 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:58Z","lastTransitionTime":"2026-01-10T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.265046 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.265115 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.265134 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.265160 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.265177 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:58Z","lastTransitionTime":"2026-01-10T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.368170 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.368273 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.368291 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.368314 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.368332 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:58Z","lastTransitionTime":"2026-01-10T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.471119 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.471174 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.471217 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.471237 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.471249 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:58Z","lastTransitionTime":"2026-01-10T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.574572 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.574635 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.574654 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.574677 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.574694 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:58Z","lastTransitionTime":"2026-01-10T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.677721 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.677767 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.677779 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.677797 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.677813 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:58Z","lastTransitionTime":"2026-01-10T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.692612 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.692694 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.692619 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:47:58 crc kubenswrapper[4810]: E0110 06:47:58.692777 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:47:58 crc kubenswrapper[4810]: E0110 06:47:58.692966 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:47:58 crc kubenswrapper[4810]: E0110 06:47:58.693242 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.782594 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.782659 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.782674 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.782700 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.782716 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:58Z","lastTransitionTime":"2026-01-10T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.886102 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.886165 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.886289 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.886316 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.886333 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:58Z","lastTransitionTime":"2026-01-10T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.989084 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.989146 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.989164 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.989219 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:58 crc kubenswrapper[4810]: I0110 06:47:58.989240 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:58Z","lastTransitionTime":"2026-01-10T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.092418 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.092467 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.092479 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.092499 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.092513 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:59Z","lastTransitionTime":"2026-01-10T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.195176 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.195289 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.195311 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.195335 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.195355 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:59Z","lastTransitionTime":"2026-01-10T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.298645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.298710 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.298727 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.298753 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.298770 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:59Z","lastTransitionTime":"2026-01-10T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.402592 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.402654 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.402666 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.402687 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.402704 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:59Z","lastTransitionTime":"2026-01-10T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.505445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.505518 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.505553 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.505585 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.505607 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:59Z","lastTransitionTime":"2026-01-10T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.608629 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.608684 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.608702 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.608725 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.608741 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:59Z","lastTransitionTime":"2026-01-10T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.692706 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:47:59 crc kubenswrapper[4810]: E0110 06:47:59.693000 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.716434 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.716726 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.717557 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.717600 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.717624 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:59Z","lastTransitionTime":"2026-01-10T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.821180 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.821284 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.821304 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.821329 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.821347 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:59Z","lastTransitionTime":"2026-01-10T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.923624 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.923690 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.923711 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.923738 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:47:59 crc kubenswrapper[4810]: I0110 06:47:59.923755 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:47:59Z","lastTransitionTime":"2026-01-10T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.026642 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.026700 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.026718 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.026742 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.026758 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:00Z","lastTransitionTime":"2026-01-10T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.130050 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.130113 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.130137 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.130165 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.130186 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:00Z","lastTransitionTime":"2026-01-10T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.233708 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.233759 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.233778 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.233819 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.233854 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:00Z","lastTransitionTime":"2026-01-10T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.336719 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.336778 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.336795 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.336821 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.336842 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:00Z","lastTransitionTime":"2026-01-10T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.439093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.439169 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.439224 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.439258 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.439282 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:00Z","lastTransitionTime":"2026-01-10T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.542546 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.542589 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.542605 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.542630 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.542652 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:00Z","lastTransitionTime":"2026-01-10T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.645063 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.645135 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.645159 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.645222 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.645249 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:00Z","lastTransitionTime":"2026-01-10T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.692175 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.692282 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:00 crc kubenswrapper[4810]: E0110 06:48:00.692403 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.692425 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:00 crc kubenswrapper[4810]: E0110 06:48:00.693146 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:00 crc kubenswrapper[4810]: E0110 06:48:00.693348 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.693793 4810 scope.go:117] "RemoveContainer" containerID="316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8" Jan 10 06:48:00 crc kubenswrapper[4810]: E0110 06:48:00.694282 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.748861 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.748918 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.748936 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.748961 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.748979 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:00Z","lastTransitionTime":"2026-01-10T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.852588 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.852671 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.852696 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.852727 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.852750 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:00Z","lastTransitionTime":"2026-01-10T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.956321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.956400 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.956423 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.956459 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:00 crc kubenswrapper[4810]: I0110 06:48:00.956483 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:00Z","lastTransitionTime":"2026-01-10T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.059725 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.059771 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.059792 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.059818 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.059838 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:01Z","lastTransitionTime":"2026-01-10T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.162947 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.163006 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.163023 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.163046 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.163062 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:01Z","lastTransitionTime":"2026-01-10T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.266932 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.266993 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.267032 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.267067 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.267092 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:01Z","lastTransitionTime":"2026-01-10T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.370594 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.370686 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.370715 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.370745 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.370767 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:01Z","lastTransitionTime":"2026-01-10T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.473493 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.473549 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.473561 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.473580 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.473596 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:01Z","lastTransitionTime":"2026-01-10T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.577447 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.577531 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.577557 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.577587 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.577614 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:01Z","lastTransitionTime":"2026-01-10T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.681068 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.681116 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.681127 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.681144 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.681155 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:01Z","lastTransitionTime":"2026-01-10T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.692611 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:01 crc kubenswrapper[4810]: E0110 06:48:01.692721 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.703410 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmtrv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"856b0311-fbd9-44d8-ab6a-e5e93843ba75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3deae051df70d6915f0ed2b4675ddd4e27c318639a1c0d42fa4d06e83f245bb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\
"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxxlt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmtrv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.719250 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4683c08-cf2f-4254-82cc-6843a69cef7a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c71b7e621e2b39c51ab2a8542fd9754b2747b0fb64701ed2d45ba4cc84479b46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a316d8de5704b55e5cecaca3b5c7252666fc25c949fe17f436d7600181eef566\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.740016 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"00b1005a-8ccb-486a-8b63-63f8ab72f35a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c74e4116670b4b76c0e97da8860f131e35e35654aaa9a6d61dbbbcff8ee709a9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://45844eae3329a6b6216db06693fa46f7535ea675ba355f1a2413b17cf191a64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74430aa7925741254bc91449649fd0ad41bc99f6cca0202d89024ba47b533f9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://39181c42cdc2acf09461557b39bd62a0d894f672fd3f1c1705364a368d29592f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b5dc299f2df4702ec13846fdd440bb7837d6b021ba59cf251dddaeb8ac5c7929\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fcec6c0904d224ae234a0d13f5f860867f300efbb14b79364cfcb1b93d847e85\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e2784e997043fbc998a73aad5cd065160a4cea790b697f0e97913ef0d47f6709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92f23325f8e3ae8a218d0adb28e8244bafee0ea70d665bc4b740ac87f99d0ccb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.760182 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85327799-8e01-4084-99a8-f2c26b046940\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.776380 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.784802 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:01 crc 
kubenswrapper[4810]: I0110 06:48:01.784853 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.784866 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.784885 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.784898 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:01Z","lastTransitionTime":"2026-01-10T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.794040 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-w95z8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11fe83f0-d81e-4bb1-8ba5-a4a41a5b4073\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f25e34ff3a8968898a85040c5c4c328c6628f7bcae065c9c9dff86613f4833d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlbq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:30Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-w95z8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.812944 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d25e174-61f6-4b97-8e20-dcb9d255f116\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7ede059513d98aa9dcb9b508b71ed42a4fa048fa212b623cabb7b28ed1ab825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b1807f150027b0221a0ca4ea41a955d15b8dfd0c9128f7c7ac5ebd7c456fb1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7p6n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:43Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7zmfj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.827953 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9nv84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6741fd18-31c0-4bc3-be74-c0f6080c67af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rg7n6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9nv84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc 
kubenswrapper[4810]: I0110 06:48:01.847286 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b03534d-e699-4409-8a82-c68558518e14\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://465a24b8df901ea812cf920a0b25272e7f4e14efa834d2886fa5d8314b6df8b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28045d7bcba3436b005cf83fdb76d5e5a8265b42b3197ac5482edfc11ab603c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58eaa8807fe7d8c766b64c7b8a5a4bc5f93e96433667ff868575a92e91b5065d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.869486 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d04397eb8ade02139c6c404bb799d99eb624a0f6051b2a83510e50c9930a5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.888762 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.888841 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.888868 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.888901 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.888921 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:01Z","lastTransitionTime":"2026-01-10T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.889477 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.906377 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed6c297468f6e4c13c64e99b6486fdd47f5aee88292f73d3149ef7c93337ec4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3412fe53b19bdfd77b06a7539f8e49f9f9ef6363d66b445dc6de5603943cfc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.923860 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-t7gh2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34d87e8a-cdfb-46ed-97db-2d07cffec516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:18Z\\\",\\\"message\\\":\\\"2026-01-10T06:46:33+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43\\\\n2026-01-10T06:46:33+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1b2a3474-552a-4518-a314-dfad40680b43 to /host/opt/cni/bin/\\\\n2026-01-10T06:46:33Z [verbose] multus-daemon started\\\\n2026-01-10T06:46:33Z [verbose] 
Readiness Indicator file check\\\\n2026-01-10T06:47:18Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jcjhj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-t7gh2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.937817 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73649741-6005-4ee3-8b33-7b703540835e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21f
6e9c2377479f0dbcf35b4bff4a68aad26b73fdda29fce3f8c948b78b9f7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2476fd308ae9f76ddb6a417047c18a72a72626d1c7405588edf7dff230dfa673\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93ffc73a85bc4ab60ab089973568ba229323a7b18204c1703c25857863ac42dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d73c1f687d68ba211351ca114caec2f4944b0953cd8daeb79a4f9aa7573c4a96\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c81b1c7657659d11127dbc5c68e138c52a101f7c62a10ee134db145a5483184e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://845f86ce2ef92e202d8611f3b1a4bd2abe0e0a162e04b6e55314acb70cdc8549\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d49526e066e15929be1fc8952215932df5b637efda3d8e6727a1e9c885e38332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l29s7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dwv4g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.965622 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce51084-e094-437c-a988-66b17982fd5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:34Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-10T06:47:34Z\\\",\\\"message\\\":\\\"ntroller.go:776] Recording success event on pod openshift-multus/multus-t7gh2\\\\nI0110 06:47:33.879634 6822 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-lmtrv\\\\nI0110 06:47:33.879373 6822 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879634 
6822 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0110 06:47:33.879649 6822 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879658 6822 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0110 06:47:33.879663 6822 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0110 06:47:33.879667 6822 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0110 06:47:33.879538 6822 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-cr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-10T06:47:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d44260c0de888ee756
803da9949a9170f103b8998d28335d81125e92851b86be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55f7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-t4zqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.986591 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97b90fcb-ba0a-4459-aab4-bbda55a86a04\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8461d4cd11affc42e44bdbd5dc449c7718999c047f8ed2af4120d7a63cd09d40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bebcb1e8d8ccb6a5fb41bf56e4d42794017c879df92356ee034d17842328e152\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb162db897d258158f60bfebb6a2cc62ca45241c789c4c43c439a09fe5201c05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://b1b79f1a2fc2311f4d1e9494cb2188e8d511564452bfec9bd7fc1ade59a012ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-10T06:46:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-10T06:46:12Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:01Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.991165 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.991248 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.991265 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.991287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:01 crc kubenswrapper[4810]: I0110 06:48:01.991305 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:01Z","lastTransitionTime":"2026-01-10T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.003541 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:02Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.016906 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05927feccedeb2f570055d7767638634958f2036f3d9dc3859f27b759637f8f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-10T06:48:02Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.029775 4810 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5b79429-9259-412f-bab8-27865ab7029b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-10T06:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://143c7d94cefbc1d9477380e309294cd945211d6fb3c20c76874eea428d18885d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-10T06:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv7jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-10T06:46:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-8c5qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:02Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.095281 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 
06:48:02.095362 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.095384 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.095409 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.095427 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:02Z","lastTransitionTime":"2026-01-10T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.198543 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.198589 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.198601 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.198619 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.198630 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:02Z","lastTransitionTime":"2026-01-10T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.301303 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.301340 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.301349 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.301363 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.301371 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:02Z","lastTransitionTime":"2026-01-10T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.403926 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.404658 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.404817 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.404958 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.405094 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:02Z","lastTransitionTime":"2026-01-10T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.508626 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.508988 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.509141 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.509321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.509606 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:02Z","lastTransitionTime":"2026-01-10T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.612723 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.612997 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.613108 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.613185 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.613280 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:02Z","lastTransitionTime":"2026-01-10T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.692133 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:02 crc kubenswrapper[4810]: E0110 06:48:02.692348 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.692588 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.692158 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:02 crc kubenswrapper[4810]: E0110 06:48:02.692777 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:02 crc kubenswrapper[4810]: E0110 06:48:02.693085 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.716707 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.716759 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.716778 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.716805 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.716823 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:02Z","lastTransitionTime":"2026-01-10T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.819594 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.819905 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.820131 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.820387 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.820568 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:02Z","lastTransitionTime":"2026-01-10T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.877546 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.877623 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.877672 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.877704 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.877724 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:02Z","lastTransitionTime":"2026-01-10T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:02 crc kubenswrapper[4810]: E0110 06:48:02.898929 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:02Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.904466 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.904525 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.904544 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.904569 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.904587 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:02Z","lastTransitionTime":"2026-01-10T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:02 crc kubenswrapper[4810]: E0110 06:48:02.924837 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:02Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.930303 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.930356 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.930375 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.930399 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.930423 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:02Z","lastTransitionTime":"2026-01-10T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:02 crc kubenswrapper[4810]: E0110 06:48:02.950136 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:02Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.955575 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.955656 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.955684 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.955714 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.955735 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:02Z","lastTransitionTime":"2026-01-10T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:02 crc kubenswrapper[4810]: E0110 06:48:02.976322 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:02Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.983223 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.983294 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.983313 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.983340 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:02 crc kubenswrapper[4810]: I0110 06:48:02.983358 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:02Z","lastTransitionTime":"2026-01-10T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:03 crc kubenswrapper[4810]: E0110 06:48:03.014369 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-10T06:48:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f454f6a0-7590-4048-bb2d-55af2f1576d0\\\",\\\"systemUUID\\\":\\\"0be3815d-a057-4f47-a377-5918543441fa\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-10T06:48:03Z is after 2025-08-24T17:21:41Z" Jan 10 06:48:03 crc kubenswrapper[4810]: E0110 06:48:03.014686 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.016709 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.016767 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.016793 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.016818 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.016836 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:03Z","lastTransitionTime":"2026-01-10T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.120334 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.120397 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.120417 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.120440 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.120459 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:03Z","lastTransitionTime":"2026-01-10T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.223337 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.223395 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.223416 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.223445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.223467 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:03Z","lastTransitionTime":"2026-01-10T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.326465 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.326499 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.326510 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.326526 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.326536 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:03Z","lastTransitionTime":"2026-01-10T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.429248 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.429313 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.429335 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.429361 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.429380 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:03Z","lastTransitionTime":"2026-01-10T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.532144 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.532259 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.532286 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.532317 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.532343 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:03Z","lastTransitionTime":"2026-01-10T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.635562 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.635625 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.635668 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.635703 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.635725 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:03Z","lastTransitionTime":"2026-01-10T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.692541 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:03 crc kubenswrapper[4810]: E0110 06:48:03.692677 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.738340 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.738374 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.738383 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.738395 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.738403 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:03Z","lastTransitionTime":"2026-01-10T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.842279 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.842338 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.842354 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.842376 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.842392 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:03Z","lastTransitionTime":"2026-01-10T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.945658 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.945750 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.945768 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.945822 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:03 crc kubenswrapper[4810]: I0110 06:48:03.945840 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:03Z","lastTransitionTime":"2026-01-10T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.049321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.049366 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.049384 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.049406 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.049426 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:04Z","lastTransitionTime":"2026-01-10T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.152228 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.152279 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.152298 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.152323 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.152342 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:04Z","lastTransitionTime":"2026-01-10T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.255236 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.255283 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.255298 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.255316 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.255330 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:04Z","lastTransitionTime":"2026-01-10T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.358174 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.358294 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.358315 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.358340 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.358359 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:04Z","lastTransitionTime":"2026-01-10T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.460892 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.460971 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.460994 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.461026 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.461080 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:04Z","lastTransitionTime":"2026-01-10T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.563959 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.564024 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.564044 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.564068 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.564087 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:04Z","lastTransitionTime":"2026-01-10T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.667669 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.667735 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.667752 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.667777 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.667794 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:04Z","lastTransitionTime":"2026-01-10T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.692431 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.692546 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.692810 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:04 crc kubenswrapper[4810]: E0110 06:48:04.693034 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:04 crc kubenswrapper[4810]: E0110 06:48:04.693169 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:04 crc kubenswrapper[4810]: E0110 06:48:04.693280 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.770464 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.770518 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.770536 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.770560 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.770577 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:04Z","lastTransitionTime":"2026-01-10T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.873493 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.873553 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.873569 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.873592 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.873608 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:04Z","lastTransitionTime":"2026-01-10T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.976711 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.976790 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.976814 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.976843 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:04 crc kubenswrapper[4810]: I0110 06:48:04.976914 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:04Z","lastTransitionTime":"2026-01-10T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.079586 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.079662 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.079688 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.079716 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.079737 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:05Z","lastTransitionTime":"2026-01-10T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.183454 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.183493 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.183504 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.183520 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.183533 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:05Z","lastTransitionTime":"2026-01-10T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.286137 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.286233 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.286252 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.286276 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.286293 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:05Z","lastTransitionTime":"2026-01-10T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.389417 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.389495 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.389520 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.389552 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.389569 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:05Z","lastTransitionTime":"2026-01-10T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.441992 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7gh2_34d87e8a-cdfb-46ed-97db-2d07cffec516/kube-multus/1.log" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.443405 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7gh2_34d87e8a-cdfb-46ed-97db-2d07cffec516/kube-multus/0.log" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.443490 4810 generic.go:334] "Generic (PLEG): container finished" podID="34d87e8a-cdfb-46ed-97db-2d07cffec516" containerID="41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94" exitCode=1 Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.443717 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7gh2" event={"ID":"34d87e8a-cdfb-46ed-97db-2d07cffec516","Type":"ContainerDied","Data":"41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94"} Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.443765 4810 scope.go:117] "RemoveContainer" containerID="90796b582ede5bb8f3e15ba433846fddfec8a6cf91a02c01b5c6b73f5f958fae" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.444741 4810 scope.go:117] "RemoveContainer" containerID="41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94" Jan 10 06:48:05 crc kubenswrapper[4810]: E0110 06:48:05.449868 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-t7gh2_openshift-multus(34d87e8a-cdfb-46ed-97db-2d07cffec516)\"" pod="openshift-multus/multus-t7gh2" podUID="34d87e8a-cdfb-46ed-97db-2d07cffec516" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.494421 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:05 crc 
kubenswrapper[4810]: I0110 06:48:05.497511 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.497567 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.497625 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.497650 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:05Z","lastTransitionTime":"2026-01-10T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.512485 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podStartSLOduration=95.51245158 podStartE2EDuration="1m35.51245158s" podCreationTimestamp="2026-01-10 06:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:05.512091842 +0000 UTC m=+114.127584765" watchObservedRunningTime="2026-01-10 06:48:05.51245158 +0000 UTC m=+114.127944503" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.587984 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dwv4g" podStartSLOduration=94.587954168 podStartE2EDuration="1m34.587954168s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-10 06:48:05.557764999 +0000 UTC m=+114.173257972" watchObservedRunningTime="2026-01-10 06:48:05.587954168 +0000 UTC m=+114.203447111" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.605605 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.605682 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.605703 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.605729 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.605749 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:05Z","lastTransitionTime":"2026-01-10T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.613271 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=68.613242617 podStartE2EDuration="1m8.613242617s" podCreationTimestamp="2026-01-10 06:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:05.612038068 +0000 UTC m=+114.227530951" watchObservedRunningTime="2026-01-10 06:48:05.613242617 +0000 UTC m=+114.228735570" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.642508 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=95.642480383 podStartE2EDuration="1m35.642480383s" podCreationTimestamp="2026-01-10 06:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:05.640447413 +0000 UTC m=+114.255940336" watchObservedRunningTime="2026-01-10 06:48:05.642480383 +0000 UTC m=+114.257973306" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.660703 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=96.660681878 podStartE2EDuration="1m36.660681878s" podCreationTimestamp="2026-01-10 06:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:05.660574056 +0000 UTC m=+114.276066999" watchObservedRunningTime="2026-01-10 06:48:05.660681878 +0000 UTC m=+114.276174761" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.690301 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lmtrv" podStartSLOduration=95.690234982 
podStartE2EDuration="1m35.690234982s" podCreationTimestamp="2026-01-10 06:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:05.689634097 +0000 UTC m=+114.305126990" watchObservedRunningTime="2026-01-10 06:48:05.690234982 +0000 UTC m=+114.305727865" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.692732 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:05 crc kubenswrapper[4810]: E0110 06:48:05.692927 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.702658 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=33.702640835 podStartE2EDuration="33.702640835s" podCreationTimestamp="2026-01-10 06:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:05.701941238 +0000 UTC m=+114.317434131" watchObservedRunningTime="2026-01-10 06:48:05.702640835 +0000 UTC m=+114.318133718" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.707463 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.707503 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 
06:48:05.707518 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.707533 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.707545 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:05Z","lastTransitionTime":"2026-01-10T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.713862 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-w95z8" podStartSLOduration=96.71384788 podStartE2EDuration="1m36.71384788s" podCreationTimestamp="2026-01-10 06:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:05.713410008 +0000 UTC m=+114.328902921" watchObservedRunningTime="2026-01-10 06:48:05.71384788 +0000 UTC m=+114.329340763" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.778213 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7zmfj" podStartSLOduration=94.778175784 podStartE2EDuration="1m34.778175784s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:05.777913718 +0000 UTC m=+114.393406621" watchObservedRunningTime="2026-01-10 06:48:05.778175784 +0000 UTC m=+114.393668667" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 
06:48:05.809298 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.809341 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.809351 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.809366 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.809378 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:05Z","lastTransitionTime":"2026-01-10T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.810871 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=95.810856764 podStartE2EDuration="1m35.810856764s" podCreationTimestamp="2026-01-10 06:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:05.810440024 +0000 UTC m=+114.425932907" watchObservedRunningTime="2026-01-10 06:48:05.810856764 +0000 UTC m=+114.426349647" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.912086 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.912135 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.912147 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.912168 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:05 crc kubenswrapper[4810]: I0110 06:48:05.912179 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:05Z","lastTransitionTime":"2026-01-10T06:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.015698 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.016164 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.016430 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.016623 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.016793 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:06Z","lastTransitionTime":"2026-01-10T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.120439 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.120550 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.120575 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.120605 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.120629 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:06Z","lastTransitionTime":"2026-01-10T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.224004 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.224071 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.224093 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.224125 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.224147 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:06Z","lastTransitionTime":"2026-01-10T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.327538 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.327661 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.327684 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.327707 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.327724 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:06Z","lastTransitionTime":"2026-01-10T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.430736 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.430850 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.430871 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.430894 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.430910 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:06Z","lastTransitionTime":"2026-01-10T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.449098 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7gh2_34d87e8a-cdfb-46ed-97db-2d07cffec516/kube-multus/1.log" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.533999 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.534064 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.534086 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.534112 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.534133 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:06Z","lastTransitionTime":"2026-01-10T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.637938 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.638009 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.638029 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.638056 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.638075 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:06Z","lastTransitionTime":"2026-01-10T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.692977 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.693137 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.693498 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:06 crc kubenswrapper[4810]: E0110 06:48:06.693777 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:06 crc kubenswrapper[4810]: E0110 06:48:06.693956 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:06 crc kubenswrapper[4810]: E0110 06:48:06.694221 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.741730 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.741807 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.741827 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.741849 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.741866 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:06Z","lastTransitionTime":"2026-01-10T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.845512 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.845571 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.845587 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.845611 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.845629 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:06Z","lastTransitionTime":"2026-01-10T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.947762 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.947853 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.947902 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.947927 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:06 crc kubenswrapper[4810]: I0110 06:48:06.947943 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:06Z","lastTransitionTime":"2026-01-10T06:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.051228 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.051304 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.051327 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.051353 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.051376 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:07Z","lastTransitionTime":"2026-01-10T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.154146 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.154230 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.154276 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.154302 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.154320 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:07Z","lastTransitionTime":"2026-01-10T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.256954 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.257001 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.257018 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.257043 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.257061 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:07Z","lastTransitionTime":"2026-01-10T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.360484 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.360554 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.360581 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.360625 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.360653 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:07Z","lastTransitionTime":"2026-01-10T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.463701 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.463767 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.463793 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.463823 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.463844 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:07Z","lastTransitionTime":"2026-01-10T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.566339 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.566384 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.566396 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.566414 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.566428 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:07Z","lastTransitionTime":"2026-01-10T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.669461 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.669519 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.669539 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.669564 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.669583 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:07Z","lastTransitionTime":"2026-01-10T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.692385 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84"
Jan 10 06:48:07 crc kubenswrapper[4810]: E0110 06:48:07.692577 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.772728 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.772818 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.772846 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.772877 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.772900 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:07Z","lastTransitionTime":"2026-01-10T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.875751 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.875884 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.875904 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.875929 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.875948 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:07Z","lastTransitionTime":"2026-01-10T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.979236 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.979299 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.979317 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.979342 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:07 crc kubenswrapper[4810]: I0110 06:48:07.979359 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:07Z","lastTransitionTime":"2026-01-10T06:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.082515 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.082593 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.082616 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.082649 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.082672 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:08Z","lastTransitionTime":"2026-01-10T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.185485 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.185537 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.185555 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.185578 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.185595 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:08Z","lastTransitionTime":"2026-01-10T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.288518 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.288695 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.288729 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.288814 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.288838 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:08Z","lastTransitionTime":"2026-01-10T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.391506 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.391567 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.391585 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.391610 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.391627 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:08Z","lastTransitionTime":"2026-01-10T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.494120 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.494161 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.494174 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.494209 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.494223 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:08Z","lastTransitionTime":"2026-01-10T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.596564 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.597185 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.597291 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.597376 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.597457 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:08Z","lastTransitionTime":"2026-01-10T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.692792 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.692836 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:48:08 crc kubenswrapper[4810]: E0110 06:48:08.692985 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:48:08 crc kubenswrapper[4810]: E0110 06:48:08.693089 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.693531 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:48:08 crc kubenswrapper[4810]: E0110 06:48:08.693898 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.700083 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.700287 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.700398 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.700498 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.700594 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:08Z","lastTransitionTime":"2026-01-10T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.804085 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.804145 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.804169 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.804228 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.804251 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:08Z","lastTransitionTime":"2026-01-10T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.907013 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.907070 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.907089 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.907113 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:08 crc kubenswrapper[4810]: I0110 06:48:08.907131 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:08Z","lastTransitionTime":"2026-01-10T06:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.010108 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.010148 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.010160 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.010176 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.010189 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:09Z","lastTransitionTime":"2026-01-10T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.112786 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.112851 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.112870 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.112895 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.112914 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:09Z","lastTransitionTime":"2026-01-10T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.216292 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.216384 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.216405 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.216430 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.216447 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:09Z","lastTransitionTime":"2026-01-10T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.320030 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.320094 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.320116 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.320149 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.320172 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:09Z","lastTransitionTime":"2026-01-10T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.422244 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.422291 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.422310 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.422334 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.422351 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:09Z","lastTransitionTime":"2026-01-10T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.524826 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.524900 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.524926 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.524958 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.524982 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:09Z","lastTransitionTime":"2026-01-10T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.628496 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.628557 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.628574 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.628597 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.628614 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:09Z","lastTransitionTime":"2026-01-10T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.693059 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84"
Jan 10 06:48:09 crc kubenswrapper[4810]: E0110 06:48:09.693273 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.730885 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.730945 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.730965 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.730989 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.731007 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:09Z","lastTransitionTime":"2026-01-10T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.834276 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.834347 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.834363 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.834388 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.834405 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:09Z","lastTransitionTime":"2026-01-10T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.937577 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.937641 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.937658 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.937686 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:09 crc kubenswrapper[4810]: I0110 06:48:09.937705 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:09Z","lastTransitionTime":"2026-01-10T06:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.040674 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.040733 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.040747 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.040768 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.040781 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:10Z","lastTransitionTime":"2026-01-10T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.143901 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.143974 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.143997 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.144026 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.144047 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:10Z","lastTransitionTime":"2026-01-10T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.246953 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.247010 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.247025 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.247043 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.247055 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:10Z","lastTransitionTime":"2026-01-10T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.350353 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.350410 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.350427 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.350445 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.350459 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:10Z","lastTransitionTime":"2026-01-10T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.453627 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.453677 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.453689 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.453708 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.453721 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:10Z","lastTransitionTime":"2026-01-10T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.555865 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.555914 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.555923 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.555937 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.555946 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:10Z","lastTransitionTime":"2026-01-10T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.658951 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.659014 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.659038 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.659067 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.659088 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:10Z","lastTransitionTime":"2026-01-10T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.692241 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.692287 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.692255 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:10 crc kubenswrapper[4810]: E0110 06:48:10.692625 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:10 crc kubenswrapper[4810]: E0110 06:48:10.692763 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:10 crc kubenswrapper[4810]: E0110 06:48:10.692920 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.762385 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.762432 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.762452 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.762475 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.762491 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:10Z","lastTransitionTime":"2026-01-10T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.865595 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.865645 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.865665 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.865689 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.865707 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:10Z","lastTransitionTime":"2026-01-10T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.968730 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.968825 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.968843 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.968866 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:10 crc kubenswrapper[4810]: I0110 06:48:10.968883 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:10Z","lastTransitionTime":"2026-01-10T06:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.071465 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.071536 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.071554 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.071578 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.071649 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:11Z","lastTransitionTime":"2026-01-10T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.175321 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.175401 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.175422 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.175453 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.175475 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:11Z","lastTransitionTime":"2026-01-10T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.278089 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.278243 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.278270 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.278303 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.278326 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:11Z","lastTransitionTime":"2026-01-10T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.381689 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.381755 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.381778 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.381808 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.381828 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:11Z","lastTransitionTime":"2026-01-10T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.485453 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.485537 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.485556 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.485580 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.485598 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:11Z","lastTransitionTime":"2026-01-10T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.589132 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.589179 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.589205 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.589223 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.589234 4810 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:11Z","lastTransitionTime":"2026-01-10T06:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:11 crc kubenswrapper[4810]: E0110 06:48:11.689439 4810 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.691909 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:11 crc kubenswrapper[4810]: E0110 06:48:11.693815 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:11 crc kubenswrapper[4810]: I0110 06:48:11.694972 4810 scope.go:117] "RemoveContainer" containerID="316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8" Jan 10 06:48:11 crc kubenswrapper[4810]: E0110 06:48:11.695294 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-t4zqb_openshift-ovn-kubernetes(dce51084-e094-437c-a988-66b17982fd5d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" Jan 10 06:48:11 crc kubenswrapper[4810]: E0110 06:48:11.798075 4810 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 10 06:48:12 crc kubenswrapper[4810]: I0110 06:48:12.692335 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:12 crc kubenswrapper[4810]: I0110 06:48:12.692374 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:12 crc kubenswrapper[4810]: I0110 06:48:12.692372 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:12 crc kubenswrapper[4810]: E0110 06:48:12.692568 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:12 crc kubenswrapper[4810]: E0110 06:48:12.692697 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:12 crc kubenswrapper[4810]: E0110 06:48:12.692783 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.349422 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.349493 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.349512 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.349538 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.349558 4810 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-10T06:48:13Z","lastTransitionTime":"2026-01-10T06:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.413137 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95"] Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.413788 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.416056 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.416632 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.416751 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.416801 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.517004 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 
06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.517231 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.517337 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.517379 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.517439 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.618285 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.618710 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.619003 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.619295 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.619505 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 
crc kubenswrapper[4810]: I0110 06:48:13.618439 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.619138 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.620401 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.625844 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: \"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.647719 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-m9b95\" (UID: 
\"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.692573 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:13 crc kubenswrapper[4810]: E0110 06:48:13.693516 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:13 crc kubenswrapper[4810]: I0110 06:48:13.742513 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" Jan 10 06:48:13 crc kubenswrapper[4810]: W0110 06:48:13.772050 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fd7f6df_efc5_4cd7_bd01_ee6b71cd48bc.slice/crio-81e70f255243fd7140328d4d1ccd2b4f1fbdf9779f58153236213bac0b7ddc7a WatchSource:0}: Error finding container 81e70f255243fd7140328d4d1ccd2b4f1fbdf9779f58153236213bac0b7ddc7a: Status 404 returned error can't find the container with id 81e70f255243fd7140328d4d1ccd2b4f1fbdf9779f58153236213bac0b7ddc7a Jan 10 06:48:14 crc kubenswrapper[4810]: I0110 06:48:14.477704 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" event={"ID":"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc","Type":"ContainerStarted","Data":"c9be3756a387d81d719741cb139389c8b7212c29c4f9f8fbcb4b7f2afd6ee0c0"} Jan 10 06:48:14 crc kubenswrapper[4810]: I0110 06:48:14.477764 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" event={"ID":"3fd7f6df-efc5-4cd7-bd01-ee6b71cd48bc","Type":"ContainerStarted","Data":"81e70f255243fd7140328d4d1ccd2b4f1fbdf9779f58153236213bac0b7ddc7a"} Jan 10 06:48:14 crc kubenswrapper[4810]: I0110 06:48:14.497693 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m9b95" podStartSLOduration=104.497671626 podStartE2EDuration="1m44.497671626s" podCreationTimestamp="2026-01-10 06:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:14.497103313 +0000 UTC m=+123.112596246" watchObservedRunningTime="2026-01-10 06:48:14.497671626 +0000 UTC m=+123.113164529" Jan 10 06:48:14 crc kubenswrapper[4810]: I0110 06:48:14.692311 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:14 crc kubenswrapper[4810]: I0110 06:48:14.692356 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:14 crc kubenswrapper[4810]: I0110 06:48:14.692530 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:14 crc kubenswrapper[4810]: E0110 06:48:14.692850 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:14 crc kubenswrapper[4810]: E0110 06:48:14.692722 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:14 crc kubenswrapper[4810]: E0110 06:48:14.692954 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:15 crc kubenswrapper[4810]: I0110 06:48:15.692808 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:15 crc kubenswrapper[4810]: E0110 06:48:15.692975 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:16 crc kubenswrapper[4810]: I0110 06:48:16.692400 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:16 crc kubenswrapper[4810]: I0110 06:48:16.692502 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:16 crc kubenswrapper[4810]: I0110 06:48:16.692881 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:16 crc kubenswrapper[4810]: E0110 06:48:16.693067 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:16 crc kubenswrapper[4810]: E0110 06:48:16.693278 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:16 crc kubenswrapper[4810]: E0110 06:48:16.693453 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:16 crc kubenswrapper[4810]: E0110 06:48:16.799381 4810 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 10 06:48:17 crc kubenswrapper[4810]: I0110 06:48:17.692118 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:17 crc kubenswrapper[4810]: E0110 06:48:17.692348 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:18 crc kubenswrapper[4810]: I0110 06:48:18.691933 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:18 crc kubenswrapper[4810]: I0110 06:48:18.692048 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:18 crc kubenswrapper[4810]: E0110 06:48:18.692124 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:18 crc kubenswrapper[4810]: E0110 06:48:18.692286 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:18 crc kubenswrapper[4810]: I0110 06:48:18.691933 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:18 crc kubenswrapper[4810]: I0110 06:48:18.692775 4810 scope.go:117] "RemoveContainer" containerID="41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94" Jan 10 06:48:18 crc kubenswrapper[4810]: E0110 06:48:18.692791 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:19 crc kubenswrapper[4810]: I0110 06:48:19.496650 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7gh2_34d87e8a-cdfb-46ed-97db-2d07cffec516/kube-multus/1.log" Jan 10 06:48:19 crc kubenswrapper[4810]: I0110 06:48:19.496730 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7gh2" event={"ID":"34d87e8a-cdfb-46ed-97db-2d07cffec516","Type":"ContainerStarted","Data":"119ef4f78bf3e22db81c778c1d3560cd9a432d4c452b44055df285c9a18fa3f0"} Jan 10 06:48:19 crc kubenswrapper[4810]: I0110 06:48:19.520981 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-t7gh2" podStartSLOduration=108.520952132 podStartE2EDuration="1m48.520952132s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:19.518024371 +0000 UTC m=+128.133517254" watchObservedRunningTime="2026-01-10 06:48:19.520952132 +0000 UTC m=+128.136445045" Jan 10 06:48:19 crc kubenswrapper[4810]: I0110 06:48:19.692920 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:19 crc kubenswrapper[4810]: E0110 06:48:19.693075 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:20 crc kubenswrapper[4810]: I0110 06:48:20.692456 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:20 crc kubenswrapper[4810]: I0110 06:48:20.692553 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:20 crc kubenswrapper[4810]: E0110 06:48:20.692616 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:20 crc kubenswrapper[4810]: E0110 06:48:20.692733 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:20 crc kubenswrapper[4810]: I0110 06:48:20.692857 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:20 crc kubenswrapper[4810]: E0110 06:48:20.692977 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:21 crc kubenswrapper[4810]: I0110 06:48:21.692668 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:21 crc kubenswrapper[4810]: E0110 06:48:21.693640 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:21 crc kubenswrapper[4810]: E0110 06:48:21.799898 4810 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 10 06:48:22 crc kubenswrapper[4810]: I0110 06:48:22.692564 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:22 crc kubenswrapper[4810]: I0110 06:48:22.692573 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:22 crc kubenswrapper[4810]: E0110 06:48:22.692771 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:22 crc kubenswrapper[4810]: I0110 06:48:22.692885 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:22 crc kubenswrapper[4810]: E0110 06:48:22.693054 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:22 crc kubenswrapper[4810]: E0110 06:48:22.693141 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:23 crc kubenswrapper[4810]: I0110 06:48:23.692820 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:23 crc kubenswrapper[4810]: E0110 06:48:23.693400 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:23 crc kubenswrapper[4810]: I0110 06:48:23.693644 4810 scope.go:117] "RemoveContainer" containerID="316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8" Jan 10 06:48:24 crc kubenswrapper[4810]: I0110 06:48:24.515729 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/3.log" Jan 10 06:48:24 crc kubenswrapper[4810]: I0110 06:48:24.519598 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerStarted","Data":"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833"} Jan 10 06:48:24 crc kubenswrapper[4810]: I0110 06:48:24.520062 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:48:24 crc kubenswrapper[4810]: I0110 06:48:24.558313 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podStartSLOduration=113.558287101 podStartE2EDuration="1m53.558287101s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:24.555705643 +0000 UTC m=+133.171198566" watchObservedRunningTime="2026-01-10 06:48:24.558287101 +0000 UTC m=+133.173780024" Jan 10 06:48:24 crc kubenswrapper[4810]: I0110 06:48:24.692879 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:24 crc kubenswrapper[4810]: I0110 06:48:24.692957 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:24 crc kubenswrapper[4810]: E0110 06:48:24.693018 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:24 crc kubenswrapper[4810]: I0110 06:48:24.692903 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:24 crc kubenswrapper[4810]: E0110 06:48:24.693121 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:24 crc kubenswrapper[4810]: E0110 06:48:24.693324 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:24 crc kubenswrapper[4810]: I0110 06:48:24.756048 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9nv84"] Jan 10 06:48:24 crc kubenswrapper[4810]: I0110 06:48:24.756243 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:24 crc kubenswrapper[4810]: E0110 06:48:24.756382 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:26 crc kubenswrapper[4810]: I0110 06:48:26.692446 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:26 crc kubenswrapper[4810]: I0110 06:48:26.692539 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:26 crc kubenswrapper[4810]: I0110 06:48:26.692462 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:26 crc kubenswrapper[4810]: E0110 06:48:26.692625 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:26 crc kubenswrapper[4810]: I0110 06:48:26.692656 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:26 crc kubenswrapper[4810]: E0110 06:48:26.692712 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:26 crc kubenswrapper[4810]: E0110 06:48:26.692799 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:26 crc kubenswrapper[4810]: E0110 06:48:26.692919 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:26 crc kubenswrapper[4810]: E0110 06:48:26.801280 4810 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 10 06:48:28 crc kubenswrapper[4810]: I0110 06:48:28.692111 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:28 crc kubenswrapper[4810]: I0110 06:48:28.692216 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:28 crc kubenswrapper[4810]: I0110 06:48:28.692252 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:28 crc kubenswrapper[4810]: I0110 06:48:28.692380 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:28 crc kubenswrapper[4810]: E0110 06:48:28.692379 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:28 crc kubenswrapper[4810]: E0110 06:48:28.692572 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 10 06:48:28 crc kubenswrapper[4810]: E0110 06:48:28.692653 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 10 06:48:28 crc kubenswrapper[4810]: E0110 06:48:28.692770 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:30 crc kubenswrapper[4810]: I0110 06:48:30.691908 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:30 crc kubenswrapper[4810]: I0110 06:48:30.691954 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:30 crc kubenswrapper[4810]: E0110 06:48:30.692084 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 10 06:48:30 crc kubenswrapper[4810]: I0110 06:48:30.692147 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 10 06:48:30 crc kubenswrapper[4810]: I0110 06:48:30.692251 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 10 06:48:30 crc kubenswrapper[4810]: E0110 06:48:30.692404 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9nv84" podUID="6741fd18-31c0-4bc3-be74-c0f6080c67af" Jan 10 06:48:30 crc kubenswrapper[4810]: E0110 06:48:30.692462 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 10 06:48:30 crc kubenswrapper[4810]: E0110 06:48:30.692544 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 10 06:48:32 crc kubenswrapper[4810]: I0110 06:48:32.691999 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:48:32 crc kubenswrapper[4810]: I0110 06:48:32.692105 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:48:32 crc kubenswrapper[4810]: I0110 06:48:32.692412 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84"
Jan 10 06:48:32 crc kubenswrapper[4810]: I0110 06:48:32.692503 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:48:32 crc kubenswrapper[4810]: I0110 06:48:32.695261 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 10 06:48:32 crc kubenswrapper[4810]: I0110 06:48:32.695689 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 10 06:48:32 crc kubenswrapper[4810]: I0110 06:48:32.695306 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 10 06:48:32 crc kubenswrapper[4810]: I0110 06:48:32.695868 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 10 06:48:32 crc kubenswrapper[4810]: I0110 06:48:32.696129 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 10 06:48:32 crc kubenswrapper[4810]: I0110 06:48:32.696741 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.506509 4810 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.558207 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nxhxd"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.559343 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.562917 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.563350 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.563573 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.563852 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.564579 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.565177 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.573477 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.573961 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.575482 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hqsj6"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.577012 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9dnp6"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.578374 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t88kw"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.596082 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.596740 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-km9lv"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.597208 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.578958 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.577441 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-hqsj6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.597734 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.598030 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.598187 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-km9lv"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.598142 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.585790 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.598686 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.598955 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.586000 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.586045 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.602914 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.603017 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.603114 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.603173 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.603436 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.603532 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.603575 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-mh9w2"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.603702 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.603801 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.603911 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.604037 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.604132 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.604545 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.604616 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.604186 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.604882 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.604244 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.605097 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.604345 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.605238 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.604757 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.605329 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.605451 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.605554 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.605936 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-xw9ln"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.606530 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xw9ln"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.606885 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.606964 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mwwrl"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.614913 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mh9w2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.627974 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p68qs"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.639276 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.641898 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.642527 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.642828 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.642885 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.642906 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643011 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643133 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643167 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643231 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643255 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643308 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643340 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643359 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643444 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643533 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643562 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643608 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643634 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643448 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643702 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643715 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643774 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643839 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643882 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643897 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643913 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.643938 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.644002 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.644045 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.644151 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.644052 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.644440 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.644485 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.644489 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.646717 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.646798 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.646869 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.646962 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.647027 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.647088 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.649552 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.651193 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.651337 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.651421 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.651485 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.651552 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.656168 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.657544 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gx7wx"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.658100 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.658514 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.659177 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.659240 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.659377 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.659494 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.659586 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.659628 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.659694 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.659912 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.660061 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.660463 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.665489 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.665872 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.666099 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.666399 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.667077 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.667320 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.668777 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fpl4m"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.669374 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.669413 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kfptr"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.670087 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kfptr"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.671141 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.671863 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.672408 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.672499 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.672737 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.672956 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.673116 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.673302 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.673670 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.674318 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.675267 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.675412 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.676766 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.686560 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nqtg7"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.686632 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.686931 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.689858 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.715843 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.716226 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.716812 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.719274 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.721368 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.721552 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.729624 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.735262 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.736149 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nxhxd"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.736223 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.736435 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.736769 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7vnnq"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.737112 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pfgf2"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.737505 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.737900 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.737923 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp"]
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.738400 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.738778 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.738993 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.739217 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.739425 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743720 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ad4afbe-31ce-4f61-866c-390b33da3bbb-console-config\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743756 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39b51457-06ae-4cf2-8b78-4f34bb908819-client-ca\") pod \"route-controller-manager-6576b87f9c-kt27g\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743777 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf5m9\" (UniqueName: \"kubernetes.io/projected/c67349c1-8617-4228-b6e9-009b94caab7a-kube-api-access-zf5m9\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743794 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e755f5b-0a8c-4108-a133-8d5955de3641-encryption-config\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743811 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvp65\" (UniqueName: \"kubernetes.io/projected/7706c50d-1b6d-4add-b687-3f59c6e080d2-kube-api-access-pvp65\") pod \"openshift-controller-manager-operator-756b6f6bc6-hqwc6\" (UID: \"7706c50d-1b6d-4add-b687-3f59c6e080d2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743831 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/db211a74-cc64-4820-b576-179f6affa220-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p68qs\" (UID: \"db211a74-cc64-4820-b576-179f6affa220\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743848 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c67349c1-8617-4228-b6e9-009b94caab7a-etcd-client\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743864 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e755f5b-0a8c-4108-a133-8d5955de3641-etcd-client\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743882 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ad4afbe-31ce-4f61-866c-390b33da3bbb-console-oauth-config\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743898 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85c62a01-c818-47b4-92fd-bcd87d8218a8-config\") pod \"machine-api-operator-5694c8668f-t88kw\" (UID: \"85c62a01-c818-47b4-92fd-bcd87d8218a8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743914 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-config\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743930 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194fadc9-26fd-44d7-84db-14d442ba6dea-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zk2r2\" (UID: \"194fadc9-26fd-44d7-84db-14d442ba6dea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743947 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/85c62a01-c818-47b4-92fd-bcd87d8218a8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t88kw\" (UID: \"85c62a01-c818-47b4-92fd-bcd87d8218a8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw"
Jan 10 06:48:33 crc
kubenswrapper[4810]: I0110 06:48:33.743966 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/058be455-c482-4531-9bd8-08b4013e7d3c-trusted-ca\") pod \"console-operator-58897d9998-km9lv\" (UID: \"058be455-c482-4531-9bd8-08b4013e7d3c\") " pod="openshift-console-operator/console-operator-58897d9998-km9lv" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.743990 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744003 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88zmh\" (UniqueName: \"kubernetes.io/projected/7e755f5b-0a8c-4108-a133-8d5955de3641-kube-api-access-88zmh\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744016 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db211a74-cc64-4820-b576-179f6affa220-serving-cert\") pod \"openshift-config-operator-7777fb866f-p68qs\" (UID: \"db211a74-cc64-4820-b576-179f6affa220\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744032 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x5sg\" (UniqueName: \"kubernetes.io/projected/7c7ec181-e310-4832-af7a-d6a5437e565d-kube-api-access-6x5sg\") pod 
\"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744047 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c67349c1-8617-4228-b6e9-009b94caab7a-serving-cert\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744062 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c67349c1-8617-4228-b6e9-009b94caab7a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744076 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ad4afbe-31ce-4f61-866c-390b33da3bbb-oauth-serving-cert\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744090 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsbrb\" (UniqueName: \"kubernetes.io/projected/e65db429-f69c-4a8a-982b-0566164c6296-kube-api-access-nsbrb\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744106 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/196f4a49-7ed6-4346-bbc1-3908eb17eadc-metrics-tls\") pod \"dns-operator-744455d44c-hqsj6\" (UID: \"196f4a49-7ed6-4346-bbc1-3908eb17eadc\") " pod="openshift-dns-operator/dns-operator-744455d44c-hqsj6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744119 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/058be455-c482-4531-9bd8-08b4013e7d3c-serving-cert\") pod \"console-operator-58897d9998-km9lv\" (UID: \"058be455-c482-4531-9bd8-08b4013e7d3c\") " pod="openshift-console-operator/console-operator-58897d9998-km9lv" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744132 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-config\") pod \"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744148 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-client-ca\") pod \"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744162 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ad4afbe-31ce-4f61-866c-390b33da3bbb-trusted-ca-bundle\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " 
pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744178 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx4rw\" (UniqueName: \"kubernetes.io/projected/1ad4afbe-31ce-4f61-866c-390b33da3bbb-kube-api-access-fx4rw\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744196 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k4cc\" (UniqueName: \"kubernetes.io/projected/db211a74-cc64-4820-b576-179f6affa220-kube-api-access-6k4cc\") pod \"openshift-config-operator-7777fb866f-p68qs\" (UID: \"db211a74-cc64-4820-b576-179f6affa220\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744225 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/058be455-c482-4531-9bd8-08b4013e7d3c-config\") pod \"console-operator-58897d9998-km9lv\" (UID: \"058be455-c482-4531-9bd8-08b4013e7d3c\") " pod="openshift-console-operator/console-operator-58897d9998-km9lv" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744238 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85c62a01-c818-47b4-92fd-bcd87d8218a8-images\") pod \"machine-api-operator-5694c8668f-t88kw\" (UID: \"85c62a01-c818-47b4-92fd-bcd87d8218a8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744253 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7e755f5b-0a8c-4108-a133-8d5955de3641-serving-cert\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744267 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ad4afbe-31ce-4f61-866c-390b33da3bbb-service-ca\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744283 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4556783-6235-4338-b56d-50146a186e0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-sssf2\" (UID: \"e4556783-6235-4338-b56d-50146a186e0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744296 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7ec181-e310-4832-af7a-d6a5437e565d-serving-cert\") pod \"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744312 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ca08908-f929-44c1-a0fe-61e028610dd8-proxy-tls\") pod \"machine-config-controller-84d6567774-4fltp\" (UID: \"4ca08908-f929-44c1-a0fe-61e028610dd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 
06:48:33.744350 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf2k9\" (UniqueName: \"kubernetes.io/projected/85c62a01-c818-47b4-92fd-bcd87d8218a8-kube-api-access-vf2k9\") pod \"machine-api-operator-5694c8668f-t88kw\" (UID: \"85c62a01-c818-47b4-92fd-bcd87d8218a8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744366 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c67349c1-8617-4228-b6e9-009b94caab7a-audit-dir\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744379 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-image-import-ca\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744394 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/194fadc9-26fd-44d7-84db-14d442ba6dea-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zk2r2\" (UID: \"194fadc9-26fd-44d7-84db-14d442ba6dea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744410 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nhzn\" (UniqueName: \"kubernetes.io/projected/194fadc9-26fd-44d7-84db-14d442ba6dea-kube-api-access-6nhzn\") pod 
\"openshift-apiserver-operator-796bbdcf4f-zk2r2\" (UID: \"194fadc9-26fd-44d7-84db-14d442ba6dea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744427 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7706c50d-1b6d-4add-b687-3f59c6e080d2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hqwc6\" (UID: \"7706c50d-1b6d-4add-b687-3f59c6e080d2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744443 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c67349c1-8617-4228-b6e9-009b94caab7a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744456 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n56pv\" (UniqueName: \"kubernetes.io/projected/058be455-c482-4531-9bd8-08b4013e7d3c-kube-api-access-n56pv\") pod \"console-operator-58897d9998-km9lv\" (UID: \"058be455-c482-4531-9bd8-08b4013e7d3c\") " pod="openshift-console-operator/console-operator-58897d9998-km9lv" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744479 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c67349c1-8617-4228-b6e9-009b94caab7a-encryption-config\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:33 crc kubenswrapper[4810]: 
I0110 06:48:33.744493 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e755f5b-0a8c-4108-a133-8d5955de3641-node-pullsecrets\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744506 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e65db429-f69c-4a8a-982b-0566164c6296-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744528 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ca08908-f929-44c1-a0fe-61e028610dd8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4fltp\" (UID: \"4ca08908-f929-44c1-a0fe-61e028610dd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744543 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e65db429-f69c-4a8a-982b-0566164c6296-config\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744558 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e4556783-6235-4338-b56d-50146a186e0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-sssf2\" (UID: \"e4556783-6235-4338-b56d-50146a186e0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744572 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b51457-06ae-4cf2-8b78-4f34bb908819-config\") pod \"route-controller-manager-6576b87f9c-kt27g\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744585 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-audit\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744598 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-etcd-serving-ca\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744611 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e755f5b-0a8c-4108-a133-8d5955de3641-audit-dir\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744631 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7706c50d-1b6d-4add-b687-3f59c6e080d2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hqwc6\" (UID: \"7706c50d-1b6d-4add-b687-3f59c6e080d2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744648 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5wg6\" (UniqueName: \"kubernetes.io/projected/196f4a49-7ed6-4346-bbc1-3908eb17eadc-kube-api-access-n5wg6\") pod \"dns-operator-744455d44c-hqsj6\" (UID: \"196f4a49-7ed6-4346-bbc1-3908eb17eadc\") " pod="openshift-dns-operator/dns-operator-744455d44c-hqsj6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744661 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744677 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad4afbe-31ce-4f61-866c-390b33da3bbb-console-serving-cert\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744691 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wngmk\" (UniqueName: \"kubernetes.io/projected/39b51457-06ae-4cf2-8b78-4f34bb908819-kube-api-access-wngmk\") pod 
\"route-controller-manager-6576b87f9c-kt27g\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744705 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e65db429-f69c-4a8a-982b-0566164c6296-service-ca-bundle\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744727 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlffs\" (UniqueName: \"kubernetes.io/projected/4ca08908-f929-44c1-a0fe-61e028610dd8-kube-api-access-zlffs\") pod \"machine-config-controller-84d6567774-4fltp\" (UID: \"4ca08908-f929-44c1-a0fe-61e028610dd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744742 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scrmj\" (UniqueName: \"kubernetes.io/projected/845a1b6a-e7d4-467d-a835-053708fed54f-kube-api-access-scrmj\") pod \"downloads-7954f5f757-xw9ln\" (UID: \"845a1b6a-e7d4-467d-a835-053708fed54f\") " pod="openshift-console/downloads-7954f5f757-xw9ln" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744759 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e65db429-f69c-4a8a-982b-0566164c6296-serving-cert\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd" Jan 10 
06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744773 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c67349c1-8617-4228-b6e9-009b94caab7a-audit-policies\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744786 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4556783-6235-4338-b56d-50146a186e0d-config\") pod \"kube-apiserver-operator-766d6c64bb-sssf2\" (UID: \"e4556783-6235-4338-b56d-50146a186e0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.744801 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b51457-06ae-4cf2-8b78-4f34bb908819-serving-cert\") pod \"route-controller-manager-6576b87f9c-kt27g\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.745152 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hqsj6"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.746043 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nrzgt"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.746718 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nrzgt" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.757238 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.757827 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.758554 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.759504 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.759645 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fddzq"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.760240 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.760697 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.761151 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.761473 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qhr8m"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.762117 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qhr8m" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.764603 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.765301 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.765743 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.766182 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.766290 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.767769 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.768750 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.769141 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.770329 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9dnp6"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.770359 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kfptr"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.771375 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.782311 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mh9w2"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.783945 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t88kw"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.785337 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.787085 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.791536 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mwwrl"] Jan 10 
06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.792873 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.794145 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xw9ln"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.795519 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.796598 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-km9lv"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.797550 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.798524 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nrzgt"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.799511 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.801533 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6bmqz"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.802648 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6bmqz" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.802690 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gx7wx"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.803909 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p68qs"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.804986 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.806041 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pfgf2"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.807035 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nqtg7"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.807688 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.808056 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.809099 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.810136 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.811352 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6bmqz"] Jan 10 06:48:33 crc 
kubenswrapper[4810]: I0110 06:48:33.812430 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.813511 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fpl4m"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.814585 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.816231 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.818263 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.821058 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.824293 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7vnnq"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.826319 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.826437 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.826649 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.827591 4810 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.828721 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qxj7l"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.830656 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qxj7l" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.830994 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fddzq"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.831840 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qxj7l"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.833047 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-q59cx"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.836842 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q59cx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.836685 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wnzzr"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.837693 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wnzzr" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.838159 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q59cx"] Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.845797 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7706c50d-1b6d-4add-b687-3f59c6e080d2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hqwc6\" (UID: \"7706c50d-1b6d-4add-b687-3f59c6e080d2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.845855 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a401b43-55ea-45de-8fdd-d00354841be3-trusted-ca\") pod \"ingress-operator-5b745b69d9-g5s7j\" (UID: \"9a401b43-55ea-45de-8fdd-d00354841be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.846627 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7706c50d-1b6d-4add-b687-3f59c6e080d2-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hqwc6\" (UID: \"7706c50d-1b6d-4add-b687-3f59c6e080d2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.847419 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pct9c\" (UniqueName: \"kubernetes.io/projected/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-kube-api-access-pct9c\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.847472 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.847484 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d83df240-de42-4c0d-ba24-ead75566bc23-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x9n66\" (UID: \"d83df240-de42-4c0d-ba24-ead75566bc23\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.847524 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scrmj\" (UniqueName: \"kubernetes.io/projected/845a1b6a-e7d4-467d-a835-053708fed54f-kube-api-access-scrmj\") pod \"downloads-7954f5f757-xw9ln\" (UID: \"845a1b6a-e7d4-467d-a835-053708fed54f\") " pod="openshift-console/downloads-7954f5f757-xw9ln" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.847547 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e65db429-f69c-4a8a-982b-0566164c6296-serving-cert\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848253 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7c3c724b-e4ad-4b31-a6c2-80b82f352ac6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-d85nm\" (UID: \"7c3c724b-e4ad-4b31-a6c2-80b82f352ac6\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848312 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tftkn\" (UniqueName: \"kubernetes.io/projected/755f846b-01e4-435d-b86c-cbe3f917aa31-kube-api-access-tftkn\") pod \"marketplace-operator-79b997595-pfgf2\" (UID: \"755f846b-01e4-435d-b86c-cbe3f917aa31\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848340 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-secret-volume\") pod \"collect-profiles-29467125-g742d\" (UID: \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848370 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4556783-6235-4338-b56d-50146a186e0d-config\") pod \"kube-apiserver-operator-766d6c64bb-sssf2\" (UID: \"e4556783-6235-4338-b56d-50146a186e0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848409 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848439 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pvp65\" (UniqueName: \"kubernetes.io/projected/7706c50d-1b6d-4add-b687-3f59c6e080d2-kube-api-access-pvp65\") pod \"openshift-controller-manager-operator-756b6f6bc6-hqwc6\" (UID: \"7706c50d-1b6d-4add-b687-3f59c6e080d2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848464 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1adffde5-7978-40ac-8d09-7faff1fae25d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mbcrm\" (UID: \"1adffde5-7978-40ac-8d09-7faff1fae25d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848487 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/755f846b-01e4-435d-b86c-cbe3f917aa31-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pfgf2\" (UID: \"755f846b-01e4-435d-b86c-cbe3f917aa31\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848510 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/931833dc-0102-4ca9-9a92-129d9a97170b-signing-key\") pod \"service-ca-9c57cc56f-7vnnq\" (UID: \"931833dc-0102-4ca9-9a92-129d9a97170b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848534 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ad4afbe-31ce-4f61-866c-390b33da3bbb-console-oauth-config\") pod \"console-f9d7485db-mh9w2\" (UID: 
\"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848557 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-config\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848582 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed76ec22-75d0-43c0-b338-5062f922edda-config\") pod \"kube-controller-manager-operator-78b949d7b-vhcxf\" (UID: \"ed76ec22-75d0-43c0-b338-5062f922edda\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848604 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dnxf\" (UniqueName: \"kubernetes.io/projected/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-kube-api-access-5dnxf\") pod \"collect-profiles-29467125-g742d\" (UID: \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848624 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/931833dc-0102-4ca9-9a92-129d9a97170b-signing-cabundle\") pod \"service-ca-9c57cc56f-7vnnq\" (UID: \"931833dc-0102-4ca9-9a92-129d9a97170b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848646 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-config\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848669 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848694 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/85c62a01-c818-47b4-92fd-bcd87d8218a8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t88kw\" (UID: \"85c62a01-c818-47b4-92fd-bcd87d8218a8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848718 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-config-volume\") pod \"collect-profiles-29467125-g742d\" (UID: \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848740 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp4l4\" (UniqueName: \"kubernetes.io/projected/931833dc-0102-4ca9-9a92-129d9a97170b-kube-api-access-cp4l4\") pod \"service-ca-9c57cc56f-7vnnq\" (UID: \"931833dc-0102-4ca9-9a92-129d9a97170b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq" Jan 10 06:48:33 
crc kubenswrapper[4810]: I0110 06:48:33.848774 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848795 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88zmh\" (UniqueName: \"kubernetes.io/projected/7e755f5b-0a8c-4108-a133-8d5955de3641-kube-api-access-88zmh\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848821 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zsxp\" (UniqueName: \"kubernetes.io/projected/7c3c724b-e4ad-4b31-a6c2-80b82f352ac6-kube-api-access-7zsxp\") pod \"olm-operator-6b444d44fb-d85nm\" (UID: \"7c3c724b-e4ad-4b31-a6c2-80b82f352ac6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848843 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-config\") pod \"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848864 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ad4afbe-31ce-4f61-866c-390b33da3bbb-trusted-ca-bundle\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " 
pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848888 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsbrb\" (UniqueName: \"kubernetes.io/projected/e65db429-f69c-4a8a-982b-0566164c6296-kube-api-access-nsbrb\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848911 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85c62a01-c818-47b4-92fd-bcd87d8218a8-images\") pod \"machine-api-operator-5694c8668f-t88kw\" (UID: \"85c62a01-c818-47b4-92fd-bcd87d8218a8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848941 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ad4afbe-31ce-4f61-866c-390b33da3bbb-service-ca\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.848986 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ca08908-f929-44c1-a0fe-61e028610dd8-proxy-tls\") pod \"machine-config-controller-84d6567774-4fltp\" (UID: \"4ca08908-f929-44c1-a0fe-61e028610dd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849009 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed76ec22-75d0-43c0-b338-5062f922edda-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-vhcxf\" (UID: \"ed76ec22-75d0-43c0-b338-5062f922edda\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849031 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849055 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9a401b43-55ea-45de-8fdd-d00354841be3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-g5s7j\" (UID: \"9a401b43-55ea-45de-8fdd-d00354841be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849079 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849100 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edd79abd-0e91-4256-9943-f1b08e35b661-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-pr5qp\" (UID: \"edd79abd-0e91-4256-9943-f1b08e35b661\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849124 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c67349c1-8617-4228-b6e9-009b94caab7a-audit-dir\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849126 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4556783-6235-4338-b56d-50146a186e0d-config\") pod \"kube-apiserver-operator-766d6c64bb-sssf2\" (UID: \"e4556783-6235-4338-b56d-50146a186e0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849145 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/194fadc9-26fd-44d7-84db-14d442ba6dea-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zk2r2\" (UID: \"194fadc9-26fd-44d7-84db-14d442ba6dea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849168 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nhzn\" (UniqueName: \"kubernetes.io/projected/194fadc9-26fd-44d7-84db-14d442ba6dea-kube-api-access-6nhzn\") pod \"openshift-apiserver-operator-796bbdcf4f-zk2r2\" (UID: \"194fadc9-26fd-44d7-84db-14d442ba6dea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849190 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7706c50d-1b6d-4add-b687-3f59c6e080d2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hqwc6\" (UID: \"7706c50d-1b6d-4add-b687-3f59c6e080d2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849233 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-etcd-service-ca\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849256 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbtr4\" (UniqueName: \"kubernetes.io/projected/d05445dd-fa70-4239-af6e-56f88234a35b-kube-api-access-rbtr4\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849280 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c67349c1-8617-4228-b6e9-009b94caab7a-encryption-config\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849308 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n56pv\" (UniqueName: \"kubernetes.io/projected/058be455-c482-4531-9bd8-08b4013e7d3c-kube-api-access-n56pv\") pod \"console-operator-58897d9998-km9lv\" (UID: \"058be455-c482-4531-9bd8-08b4013e7d3c\") " pod="openshift-console-operator/console-operator-58897d9998-km9lv" Jan 10 
06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849330 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e755f5b-0a8c-4108-a133-8d5955de3641-node-pullsecrets\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849354 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e65db429-f69c-4a8a-982b-0566164c6296-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849392 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-audit-policies\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849816 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849878 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e65db429-f69c-4a8a-982b-0566164c6296-config\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849906 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e755f5b-0a8c-4108-a133-8d5955de3641-audit-dir\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849822 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c67349c1-8617-4228-b6e9-009b94caab7a-audit-dir\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849933 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7c3c724b-e4ad-4b31-a6c2-80b82f352ac6-srv-cert\") pod \"olm-operator-6b444d44fb-d85nm\" (UID: \"7c3c724b-e4ad-4b31-a6c2-80b82f352ac6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.849981 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gh2g\" (UniqueName: \"kubernetes.io/projected/9a401b43-55ea-45de-8fdd-d00354841be3-kube-api-access-7gh2g\") pod \"ingress-operator-5b745b69d9-g5s7j\" (UID: \"9a401b43-55ea-45de-8fdd-d00354841be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.850007 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrxsz\" (UniqueName: \"kubernetes.io/projected/49455ed9-e8ab-44c8-9075-2ffbbebe36a8-kube-api-access-mrxsz\") pod \"control-plane-machine-set-operator-78cbb6b69f-r9mqp\" (UID: \"49455ed9-e8ab-44c8-9075-2ffbbebe36a8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.850032 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5wg6\" (UniqueName: \"kubernetes.io/projected/196f4a49-7ed6-4346-bbc1-3908eb17eadc-kube-api-access-n5wg6\") pod \"dns-operator-744455d44c-hqsj6\" (UID: \"196f4a49-7ed6-4346-bbc1-3908eb17eadc\") " pod="openshift-dns-operator/dns-operator-744455d44c-hqsj6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.850096 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e755f5b-0a8c-4108-a133-8d5955de3641-audit-dir\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.850430 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e755f5b-0a8c-4108-a133-8d5955de3641-node-pullsecrets\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.850575 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49455ed9-e8ab-44c8-9075-2ffbbebe36a8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-r9mqp\" (UID: \"49455ed9-e8ab-44c8-9075-2ffbbebe36a8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.850635 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad4afbe-31ce-4f61-866c-390b33da3bbb-console-serving-cert\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.850741 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e65db429-f69c-4a8a-982b-0566164c6296-config\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.850751 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wngmk\" (UniqueName: \"kubernetes.io/projected/39b51457-06ae-4cf2-8b78-4f34bb908819-kube-api-access-wngmk\") pod \"route-controller-manager-6576b87f9c-kt27g\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.850795 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e65db429-f69c-4a8a-982b-0566164c6296-service-ca-bundle\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.850834 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.850908 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlffs\" (UniqueName: \"kubernetes.io/projected/4ca08908-f929-44c1-a0fe-61e028610dd8-kube-api-access-zlffs\") pod \"machine-config-controller-84d6567774-4fltp\" (UID: \"4ca08908-f929-44c1-a0fe-61e028610dd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.850957 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851045 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-etcd-ca\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851174 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e65db429-f69c-4a8a-982b-0566164c6296-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851267 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c67349c1-8617-4228-b6e9-009b94caab7a-audit-policies\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851326 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b51457-06ae-4cf2-8b78-4f34bb908819-serving-cert\") pod \"route-controller-manager-6576b87f9c-kt27g\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851442 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-etcd-client\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851562 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1adffde5-7978-40ac-8d09-7faff1fae25d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mbcrm\" (UID: \"1adffde5-7978-40ac-8d09-7faff1fae25d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851602 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-serving-cert\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851621 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d05445dd-fa70-4239-af6e-56f88234a35b-audit-dir\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851637 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851666 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/755f846b-01e4-435d-b86c-cbe3f917aa31-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pfgf2\" (UID: \"755f846b-01e4-435d-b86c-cbe3f917aa31\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851677 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-trusted-ca-bundle\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851684 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ad4afbe-31ce-4f61-866c-390b33da3bbb-console-config\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851732 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39b51457-06ae-4cf2-8b78-4f34bb908819-client-ca\") pod \"route-controller-manager-6576b87f9c-kt27g\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851768 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85c62a01-c818-47b4-92fd-bcd87d8218a8-images\") pod \"machine-api-operator-5694c8668f-t88kw\" (UID: \"85c62a01-c818-47b4-92fd-bcd87d8218a8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851780 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf5m9\" (UniqueName: \"kubernetes.io/projected/c67349c1-8617-4228-b6e9-009b94caab7a-kube-api-access-zf5m9\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851835 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e755f5b-0a8c-4108-a133-8d5955de3641-encryption-config\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851856 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e65db429-f69c-4a8a-982b-0566164c6296-service-ca-bundle\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851901 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d83df240-de42-4c0d-ba24-ead75566bc23-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x9n66\" (UID: \"d83df240-de42-4c0d-ba24-ead75566bc23\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851941 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c67349c1-8617-4228-b6e9-009b94caab7a-etcd-client\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851961 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e755f5b-0a8c-4108-a133-8d5955de3641-etcd-client\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.851979 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/db211a74-cc64-4820-b576-179f6affa220-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p68qs\" (UID: \"db211a74-cc64-4820-b576-179f6affa220\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.852021 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85c62a01-c818-47b4-92fd-bcd87d8218a8-config\") pod \"machine-api-operator-5694c8668f-t88kw\" (UID: \"85c62a01-c818-47b4-92fd-bcd87d8218a8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.852050 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed76ec22-75d0-43c0-b338-5062f922edda-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vhcxf\" (UID: \"ed76ec22-75d0-43c0-b338-5062f922edda\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.852097 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194fadc9-26fd-44d7-84db-14d442ba6dea-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zk2r2\" (UID: \"194fadc9-26fd-44d7-84db-14d442ba6dea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.852121 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.852438 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/db211a74-cc64-4820-b576-179f6affa220-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p68qs\" (UID: \"db211a74-cc64-4820-b576-179f6affa220\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.852546 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ad4afbe-31ce-4f61-866c-390b33da3bbb-console-config\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.852748 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/058be455-c482-4531-9bd8-08b4013e7d3c-trusted-ca\") pod \"console-operator-58897d9998-km9lv\" (UID: \"058be455-c482-4531-9bd8-08b4013e7d3c\") " pod="openshift-console-operator/console-operator-58897d9998-km9lv"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.852790 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ad4afbe-31ce-4f61-866c-390b33da3bbb-service-ca\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.852806 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.852894 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db211a74-cc64-4820-b576-179f6affa220-serving-cert\") pod \"openshift-config-operator-7777fb866f-p68qs\" (UID: \"db211a74-cc64-4820-b576-179f6affa220\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.852901 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85c62a01-c818-47b4-92fd-bcd87d8218a8-config\") pod \"machine-api-operator-5694c8668f-t88kw\" (UID: \"85c62a01-c818-47b4-92fd-bcd87d8218a8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.852938 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c67349c1-8617-4228-b6e9-009b94caab7a-serving-cert\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.852998 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c67349c1-8617-4228-b6e9-009b94caab7a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853026 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x5sg\" (UniqueName: \"kubernetes.io/projected/7c7ec181-e310-4832-af7a-d6a5437e565d-kube-api-access-6x5sg\") pod \"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853088 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ad4afbe-31ce-4f61-866c-390b33da3bbb-oauth-serving-cert\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853106 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/196f4a49-7ed6-4346-bbc1-3908eb17eadc-metrics-tls\") pod \"dns-operator-744455d44c-hqsj6\" (UID: \"196f4a49-7ed6-4346-bbc1-3908eb17eadc\") " pod="openshift-dns-operator/dns-operator-744455d44c-hqsj6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853125 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/058be455-c482-4531-9bd8-08b4013e7d3c-serving-cert\") pod \"console-operator-58897d9998-km9lv\" (UID: \"058be455-c482-4531-9bd8-08b4013e7d3c\") " pod="openshift-console-operator/console-operator-58897d9998-km9lv"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853173 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-client-ca\") pod \"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853191 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx4rw\" (UniqueName: \"kubernetes.io/projected/1ad4afbe-31ce-4f61-866c-390b33da3bbb-kube-api-access-fx4rw\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853230 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pfxt\" (UniqueName: \"kubernetes.io/projected/1adffde5-7978-40ac-8d09-7faff1fae25d-kube-api-access-8pfxt\") pod \"cluster-image-registry-operator-dc59b4c8b-mbcrm\" (UID: \"1adffde5-7978-40ac-8d09-7faff1fae25d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853255 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/058be455-c482-4531-9bd8-08b4013e7d3c-config\") pod \"console-operator-58897d9998-km9lv\" (UID: \"058be455-c482-4531-9bd8-08b4013e7d3c\") " pod="openshift-console-operator/console-operator-58897d9998-km9lv"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853297 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e755f5b-0a8c-4108-a133-8d5955de3641-serving-cert\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853317 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k4cc\" (UniqueName: \"kubernetes.io/projected/db211a74-cc64-4820-b576-179f6affa220-kube-api-access-6k4cc\") pod \"openshift-config-operator-7777fb866f-p68qs\" (UID: \"db211a74-cc64-4820-b576-179f6affa220\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853337 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4556783-6235-4338-b56d-50146a186e0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-sssf2\" (UID: \"e4556783-6235-4338-b56d-50146a186e0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853355 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7ec181-e310-4832-af7a-d6a5437e565d-serving-cert\") pod \"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853374 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf2k9\" (UniqueName: \"kubernetes.io/projected/85c62a01-c818-47b4-92fd-bcd87d8218a8-kube-api-access-vf2k9\") pod \"machine-api-operator-5694c8668f-t88kw\" (UID: \"85c62a01-c818-47b4-92fd-bcd87d8218a8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853442 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/194fadc9-26fd-44d7-84db-14d442ba6dea-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zk2r2\" (UID: \"194fadc9-26fd-44d7-84db-14d442ba6dea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853937 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-config\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.853957 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/058be455-c482-4531-9bd8-08b4013e7d3c-config\") pod \"console-operator-58897d9998-km9lv\" (UID: \"058be455-c482-4531-9bd8-08b4013e7d3c\") " pod="openshift-console-operator/console-operator-58897d9998-km9lv"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.854103 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/058be455-c482-4531-9bd8-08b4013e7d3c-trusted-ca\") pod \"console-operator-58897d9998-km9lv\" (UID: \"058be455-c482-4531-9bd8-08b4013e7d3c\") " pod="openshift-console-operator/console-operator-58897d9998-km9lv"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.854278 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1adffde5-7978-40ac-8d09-7faff1fae25d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mbcrm\" (UID: \"1adffde5-7978-40ac-8d09-7faff1fae25d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.854310 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-image-import-ca\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.854327 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c67349c1-8617-4228-b6e9-009b94caab7a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.854563 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c67349c1-8617-4228-b6e9-009b94caab7a-audit-policies\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.854930 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c67349c1-8617-4228-b6e9-009b94caab7a-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855062 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-image-import-ca\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855233 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ad4afbe-31ce-4f61-866c-390b33da3bbb-trusted-ca-bundle\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855540 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c67349c1-8617-4228-b6e9-009b94caab7a-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855595 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ca08908-f929-44c1-a0fe-61e028610dd8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4fltp\" (UID: \"4ca08908-f929-44c1-a0fe-61e028610dd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855634 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9a401b43-55ea-45de-8fdd-d00354841be3-metrics-tls\") pod \"ingress-operator-5b745b69d9-g5s7j\" (UID: \"9a401b43-55ea-45de-8fdd-d00354841be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855652 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd79abd-0e91-4256-9943-f1b08e35b661-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-pr5qp\" (UID: \"edd79abd-0e91-4256-9943-f1b08e35b661\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855676 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6g4l\" (UniqueName: \"kubernetes.io/projected/edd79abd-0e91-4256-9943-f1b08e35b661-kube-api-access-x6g4l\") pod \"kube-storage-version-migrator-operator-b67b599dd-pr5qp\" (UID: \"edd79abd-0e91-4256-9943-f1b08e35b661\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855694 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83df240-de42-4c0d-ba24-ead75566bc23-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x9n66\" (UID: \"d83df240-de42-4c0d-ba24-ead75566bc23\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855714 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4556783-6235-4338-b56d-50146a186e0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-sssf2\" (UID: \"e4556783-6235-4338-b56d-50146a186e0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855735 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b51457-06ae-4cf2-8b78-4f34bb908819-config\") pod \"route-controller-manager-6576b87f9c-kt27g\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855787 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwck2\" (UniqueName: \"kubernetes.io/projected/f5f4931f-9858-4287-9f70-e2bfcb25eabd-kube-api-access-qwck2\") pod \"migrator-59844c95c7-kfptr\" (UID: \"f5f4931f-9858-4287-9f70-e2bfcb25eabd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kfptr"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855806 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-audit\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855824 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-etcd-serving-ca\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855842 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.855860 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.856024 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ad4afbe-31ce-4f61-866c-390b33da3bbb-oauth-serving-cert\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.856252 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.856548 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/85c62a01-c818-47b4-92fd-bcd87d8218a8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t88kw\" (UID: \"85c62a01-c818-47b4-92fd-bcd87d8218a8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.856739 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b51457-06ae-4cf2-8b78-4f34bb908819-config\") pod \"route-controller-manager-6576b87f9c-kt27g\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.857149 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-client-ca\") pod \"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.857188 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-audit\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.857304 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-config\") pod 
\"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.857725 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/058be455-c482-4531-9bd8-08b4013e7d3c-serving-cert\") pod \"console-operator-58897d9998-km9lv\" (UID: \"058be455-c482-4531-9bd8-08b4013e7d3c\") " pod="openshift-console-operator/console-operator-58897d9998-km9lv" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.858291 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e65db429-f69c-4a8a-982b-0566164c6296-serving-cert\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.858346 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ca08908-f929-44c1-a0fe-61e028610dd8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-4fltp\" (UID: \"4ca08908-f929-44c1-a0fe-61e028610dd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.858393 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7e755f5b-0a8c-4108-a133-8d5955de3641-etcd-serving-ca\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.859379 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c67349c1-8617-4228-b6e9-009b94caab7a-serving-cert\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.859408 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ca08908-f929-44c1-a0fe-61e028610dd8-proxy-tls\") pod \"machine-config-controller-84d6567774-4fltp\" (UID: \"4ca08908-f929-44c1-a0fe-61e028610dd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.859542 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7706c50d-1b6d-4add-b687-3f59c6e080d2-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hqwc6\" (UID: \"7706c50d-1b6d-4add-b687-3f59c6e080d2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.859666 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c67349c1-8617-4228-b6e9-009b94caab7a-encryption-config\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.859882 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e755f5b-0a8c-4108-a133-8d5955de3641-serving-cert\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.859891 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/196f4a49-7ed6-4346-bbc1-3908eb17eadc-metrics-tls\") pod \"dns-operator-744455d44c-hqsj6\" (UID: \"196f4a49-7ed6-4346-bbc1-3908eb17eadc\") " pod="openshift-dns-operator/dns-operator-744455d44c-hqsj6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.860100 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ad4afbe-31ce-4f61-866c-390b33da3bbb-console-oauth-config\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.860619 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e4556783-6235-4338-b56d-50146a186e0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-sssf2\" (UID: \"e4556783-6235-4338-b56d-50146a186e0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.860798 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db211a74-cc64-4820-b576-179f6affa220-serving-cert\") pod \"openshift-config-operator-7777fb866f-p68qs\" (UID: \"db211a74-cc64-4820-b576-179f6affa220\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.860849 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e755f5b-0a8c-4108-a133-8d5955de3641-encryption-config\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.860904 4810 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ad4afbe-31ce-4f61-866c-390b33da3bbb-console-serving-cert\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.861572 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e755f5b-0a8c-4108-a133-8d5955de3641-etcd-client\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.861684 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c67349c1-8617-4228-b6e9-009b94caab7a-etcd-client\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.861821 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7ec181-e310-4832-af7a-d6a5437e565d-serving-cert\") pod \"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.861858 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b51457-06ae-4cf2-8b78-4f34bb908819-serving-cert\") pod \"route-controller-manager-6576b87f9c-kt27g\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.865613 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/194fadc9-26fd-44d7-84db-14d442ba6dea-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zk2r2\" (UID: \"194fadc9-26fd-44d7-84db-14d442ba6dea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.867443 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.887382 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.907592 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.927815 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.948315 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.956674 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9a401b43-55ea-45de-8fdd-d00354841be3-metrics-tls\") pod \"ingress-operator-5b745b69d9-g5s7j\" (UID: \"9a401b43-55ea-45de-8fdd-d00354841be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.956731 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd79abd-0e91-4256-9943-f1b08e35b661-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-pr5qp\" (UID: \"edd79abd-0e91-4256-9943-f1b08e35b661\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.956772 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6g4l\" (UniqueName: \"kubernetes.io/projected/edd79abd-0e91-4256-9943-f1b08e35b661-kube-api-access-x6g4l\") pod \"kube-storage-version-migrator-operator-b67b599dd-pr5qp\" (UID: \"edd79abd-0e91-4256-9943-f1b08e35b661\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.956813 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83df240-de42-4c0d-ba24-ead75566bc23-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x9n66\" (UID: \"d83df240-de42-4c0d-ba24-ead75566bc23\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.956859 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwck2\" (UniqueName: \"kubernetes.io/projected/f5f4931f-9858-4287-9f70-e2bfcb25eabd-kube-api-access-qwck2\") pod \"migrator-59844c95c7-kfptr\" (UID: \"f5f4931f-9858-4287-9f70-e2bfcb25eabd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kfptr" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.956901 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.956935 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.956969 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a401b43-55ea-45de-8fdd-d00354841be3-trusted-ca\") pod \"ingress-operator-5b745b69d9-g5s7j\" (UID: \"9a401b43-55ea-45de-8fdd-d00354841be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.957024 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pct9c\" (UniqueName: \"kubernetes.io/projected/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-kube-api-access-pct9c\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.957065 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d83df240-de42-4c0d-ba24-ead75566bc23-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x9n66\" (UID: \"d83df240-de42-4c0d-ba24-ead75566bc23\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.957098 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7c3c724b-e4ad-4b31-a6c2-80b82f352ac6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-d85nm\" (UID: 
\"7c3c724b-e4ad-4b31-a6c2-80b82f352ac6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.957138 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tftkn\" (UniqueName: \"kubernetes.io/projected/755f846b-01e4-435d-b86c-cbe3f917aa31-kube-api-access-tftkn\") pod \"marketplace-operator-79b997595-pfgf2\" (UID: \"755f846b-01e4-435d-b86c-cbe3f917aa31\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.957167 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-secret-volume\") pod \"collect-profiles-29467125-g742d\" (UID: \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.957233 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.957279 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1adffde5-7978-40ac-8d09-7faff1fae25d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mbcrm\" (UID: \"1adffde5-7978-40ac-8d09-7faff1fae25d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.957314 4810 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/755f846b-01e4-435d-b86c-cbe3f917aa31-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pfgf2\" (UID: \"755f846b-01e4-435d-b86c-cbe3f917aa31\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.957344 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/931833dc-0102-4ca9-9a92-129d9a97170b-signing-key\") pod \"service-ca-9c57cc56f-7vnnq\" (UID: \"931833dc-0102-4ca9-9a92-129d9a97170b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.957386 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed76ec22-75d0-43c0-b338-5062f922edda-config\") pod \"kube-controller-manager-operator-78b949d7b-vhcxf\" (UID: \"ed76ec22-75d0-43c0-b338-5062f922edda\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.957415 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dnxf\" (UniqueName: \"kubernetes.io/projected/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-kube-api-access-5dnxf\") pod \"collect-profiles-29467125-g742d\" (UID: \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.960830 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed76ec22-75d0-43c0-b338-5062f922edda-config\") pod \"kube-controller-manager-operator-78b949d7b-vhcxf\" (UID: \"ed76ec22-75d0-43c0-b338-5062f922edda\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.959663 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/931833dc-0102-4ca9-9a92-129d9a97170b-signing-cabundle\") pod \"service-ca-9c57cc56f-7vnnq\" (UID: \"931833dc-0102-4ca9-9a92-129d9a97170b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.961280 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-config\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.961319 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.961365 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp4l4\" (UniqueName: \"kubernetes.io/projected/931833dc-0102-4ca9-9a92-129d9a97170b-kube-api-access-cp4l4\") pod \"service-ca-9c57cc56f-7vnnq\" (UID: \"931833dc-0102-4ca9-9a92-129d9a97170b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.961407 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-config-volume\") pod 
\"collect-profiles-29467125-g742d\" (UID: \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.961733 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zsxp\" (UniqueName: \"kubernetes.io/projected/7c3c724b-e4ad-4b31-a6c2-80b82f352ac6-kube-api-access-7zsxp\") pod \"olm-operator-6b444d44fb-d85nm\" (UID: \"7c3c724b-e4ad-4b31-a6c2-80b82f352ac6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.961986 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed76ec22-75d0-43c0-b338-5062f922edda-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vhcxf\" (UID: \"ed76ec22-75d0-43c0-b338-5062f922edda\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.962037 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.962082 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9a401b43-55ea-45de-8fdd-d00354841be3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-g5s7j\" (UID: \"9a401b43-55ea-45de-8fdd-d00354841be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.962387 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edd79abd-0e91-4256-9943-f1b08e35b661-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-pr5qp\" (UID: \"edd79abd-0e91-4256-9943-f1b08e35b661\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.962461 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.962538 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-etcd-service-ca\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.962603 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbtr4\" (UniqueName: \"kubernetes.io/projected/d05445dd-fa70-4239-af6e-56f88234a35b-kube-api-access-rbtr4\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.962699 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-audit-policies\") pod \"oauth-openshift-558db77b4-gx7wx\" 
(UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.962798 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.962858 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7c3c724b-e4ad-4b31-a6c2-80b82f352ac6-srv-cert\") pod \"olm-operator-6b444d44fb-d85nm\" (UID: \"7c3c724b-e4ad-4b31-a6c2-80b82f352ac6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.963093 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gh2g\" (UniqueName: \"kubernetes.io/projected/9a401b43-55ea-45de-8fdd-d00354841be3-kube-api-access-7gh2g\") pod \"ingress-operator-5b745b69d9-g5s7j\" (UID: \"9a401b43-55ea-45de-8fdd-d00354841be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.963180 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrxsz\" (UniqueName: \"kubernetes.io/projected/49455ed9-e8ab-44c8-9075-2ffbbebe36a8-kube-api-access-mrxsz\") pod \"control-plane-machine-set-operator-78cbb6b69f-r9mqp\" (UID: \"49455ed9-e8ab-44c8-9075-2ffbbebe36a8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.963268 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49455ed9-e8ab-44c8-9075-2ffbbebe36a8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-r9mqp\" (UID: \"49455ed9-e8ab-44c8-9075-2ffbbebe36a8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.963348 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.963438 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-etcd-ca\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.963498 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-etcd-client\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.963538 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1adffde5-7978-40ac-8d09-7faff1fae25d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mbcrm\" (UID: \"1adffde5-7978-40ac-8d09-7faff1fae25d\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.963581 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-serving-cert\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.963638 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.963844 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d05445dd-fa70-4239-af6e-56f88234a35b-audit-dir\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.963912 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.963958 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/755f846b-01e4-435d-b86c-cbe3f917aa31-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pfgf2\" (UID: \"755f846b-01e4-435d-b86c-cbe3f917aa31\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.964094 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d83df240-de42-4c0d-ba24-ead75566bc23-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x9n66\" (UID: \"d83df240-de42-4c0d-ba24-ead75566bc23\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.964172 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed76ec22-75d0-43c0-b338-5062f922edda-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vhcxf\" (UID: \"ed76ec22-75d0-43c0-b338-5062f922edda\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.964274 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.964328 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.964519 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pfxt\" (UniqueName: \"kubernetes.io/projected/1adffde5-7978-40ac-8d09-7faff1fae25d-kube-api-access-8pfxt\") pod \"cluster-image-registry-operator-dc59b4c8b-mbcrm\" (UID: \"1adffde5-7978-40ac-8d09-7faff1fae25d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.964612 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1adffde5-7978-40ac-8d09-7faff1fae25d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mbcrm\" (UID: \"1adffde5-7978-40ac-8d09-7faff1fae25d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.964598 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-audit-policies\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.966382 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7c3c724b-e4ad-4b31-a6c2-80b82f352ac6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-d85nm\" (UID: \"7c3c724b-e4ad-4b31-a6c2-80b82f352ac6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.966455 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/d05445dd-fa70-4239-af6e-56f88234a35b-audit-dir\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.967512 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-secret-volume\") pod \"collect-profiles-29467125-g742d\" (UID: \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.967641 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.967974 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.968242 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.969229 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.969789 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1adffde5-7978-40ac-8d09-7faff1fae25d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mbcrm\" (UID: \"1adffde5-7978-40ac-8d09-7faff1fae25d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.972806 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.973364 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.973630 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7c3c724b-e4ad-4b31-a6c2-80b82f352ac6-srv-cert\") pod \"olm-operator-6b444d44fb-d85nm\" (UID: \"7c3c724b-e4ad-4b31-a6c2-80b82f352ac6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.974697 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.974867 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed76ec22-75d0-43c0-b338-5062f922edda-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vhcxf\" (UID: \"ed76ec22-75d0-43c0-b338-5062f922edda\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.974890 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.975112 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-etcd-client\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.975971 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-serving-cert\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:33 
crc kubenswrapper[4810]: I0110 06:48:33.976027 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1adffde5-7978-40ac-8d09-7faff1fae25d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mbcrm\" (UID: \"1adffde5-7978-40ac-8d09-7faff1fae25d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.976363 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.977475 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.987248 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.993379 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-config\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.994382 4810 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39b51457-06ae-4cf2-8b78-4f34bb908819-client-ca\") pod \"route-controller-manager-6576b87f9c-kt27g\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" Jan 10 06:48:33 crc kubenswrapper[4810]: I0110 06:48:33.994529 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.008091 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.015516 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-etcd-ca\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.028129 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.033703 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-etcd-service-ca\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.047104 4810 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.067175 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.087141 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.107649 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.135032 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.138247 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a401b43-55ea-45de-8fdd-d00354841be3-trusted-ca\") pod \"ingress-operator-5b745b69d9-g5s7j\" (UID: \"9a401b43-55ea-45de-8fdd-d00354841be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.147728 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.168626 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.180539 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9a401b43-55ea-45de-8fdd-d00354841be3-metrics-tls\") pod \"ingress-operator-5b745b69d9-g5s7j\" (UID: \"9a401b43-55ea-45de-8fdd-d00354841be3\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.187346 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.207121 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.227994 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.246876 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.251052 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d83df240-de42-4c0d-ba24-ead75566bc23-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x9n66\" (UID: \"d83df240-de42-4c0d-ba24-ead75566bc23\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.268041 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.277716 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83df240-de42-4c0d-ba24-ead75566bc23-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x9n66\" (UID: \"d83df240-de42-4c0d-ba24-ead75566bc23\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66" Jan 10 06:48:34 crc kubenswrapper[4810]: 
I0110 06:48:34.308128 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.327296 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.346844 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.368471 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.387296 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.408023 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.427102 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.447322 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.452858 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/931833dc-0102-4ca9-9a92-129d9a97170b-signing-key\") pod \"service-ca-9c57cc56f-7vnnq\" (UID: \"931833dc-0102-4ca9-9a92-129d9a97170b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.467284 4810 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.477988 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/49455ed9-e8ab-44c8-9075-2ffbbebe36a8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-r9mqp\" (UID: \"49455ed9-e8ab-44c8-9075-2ffbbebe36a8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.487124 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.507116 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.522842 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/755f846b-01e4-435d-b86c-cbe3f917aa31-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pfgf2\" (UID: \"755f846b-01e4-435d-b86c-cbe3f917aa31\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.528451 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.537966 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edd79abd-0e91-4256-9943-f1b08e35b661-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-pr5qp\" (UID: \"edd79abd-0e91-4256-9943-f1b08e35b661\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.547887 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.600274 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd79abd-0e91-4256-9943-f1b08e35b661-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-pr5qp\" (UID: \"edd79abd-0e91-4256-9943-f1b08e35b661\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.601241 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.601969 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.607215 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.611107 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/931833dc-0102-4ca9-9a92-129d9a97170b-signing-cabundle\") pod \"service-ca-9c57cc56f-7vnnq\" (UID: \"931833dc-0102-4ca9-9a92-129d9a97170b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.627707 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.647972 4810 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.667272 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.688071 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.715562 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.721872 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/755f846b-01e4-435d-b86c-cbe3f917aa31-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pfgf2\" (UID: \"755f846b-01e4-435d-b86c-cbe3f917aa31\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.728337 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.734546 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-config-volume\") pod \"collect-profiles-29467125-g742d\" (UID: \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.748096 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.765724 4810 request.go:700] Waited for 1.018246465s due to client-side 
throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&limit=500&resourceVersion=0 Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.767030 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.807950 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.827660 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.847847 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.868464 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.887356 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.907758 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.927626 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.946678 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 
06:48:34.967403 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 10 06:48:34 crc kubenswrapper[4810]: I0110 06:48:34.987996 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.007738 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.027469 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.047007 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.067378 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.088056 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.107585 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.127433 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.147133 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.168041 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 10 06:48:35 crc kubenswrapper[4810]: 
I0110 06:48:35.188521 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.207931 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.228535 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.248643 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.268899 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.287869 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.308691 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.328587 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.347938 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.367706 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.387459 4810 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.407520 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.427221 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.447365 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.467538 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.488464 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.508144 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.528606 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.547303 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.567991 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.586812 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.608528 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.648827 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scrmj\" (UniqueName: \"kubernetes.io/projected/845a1b6a-e7d4-467d-a835-053708fed54f-kube-api-access-scrmj\") pod \"downloads-7954f5f757-xw9ln\" (UID: \"845a1b6a-e7d4-467d-a835-053708fed54f\") " pod="openshift-console/downloads-7954f5f757-xw9ln"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.668646 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88zmh\" (UniqueName: \"kubernetes.io/projected/7e755f5b-0a8c-4108-a133-8d5955de3641-kube-api-access-88zmh\") pod \"apiserver-76f77b778f-9dnp6\" (UID: \"7e755f5b-0a8c-4108-a133-8d5955de3641\") " pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.688502 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvp65\" (UniqueName: \"kubernetes.io/projected/7706c50d-1b6d-4add-b687-3f59c6e080d2-kube-api-access-pvp65\") pod \"openshift-controller-manager-operator-756b6f6bc6-hqwc6\" (UID: \"7706c50d-1b6d-4add-b687-3f59c6e080d2\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.716047 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsbrb\" (UniqueName: \"kubernetes.io/projected/e65db429-f69c-4a8a-982b-0566164c6296-kube-api-access-nsbrb\") pod \"authentication-operator-69f744f599-nxhxd\" (UID: \"e65db429-f69c-4a8a-982b-0566164c6296\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.726867 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n56pv\" (UniqueName: \"kubernetes.io/projected/058be455-c482-4531-9bd8-08b4013e7d3c-kube-api-access-n56pv\") pod \"console-operator-58897d9998-km9lv\" (UID: \"058be455-c482-4531-9bd8-08b4013e7d3c\") " pod="openshift-console-operator/console-operator-58897d9998-km9lv"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.742021 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-9dnp6"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.746576 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nhzn\" (UniqueName: \"kubernetes.io/projected/194fadc9-26fd-44d7-84db-14d442ba6dea-kube-api-access-6nhzn\") pod \"openshift-apiserver-operator-796bbdcf4f-zk2r2\" (UID: \"194fadc9-26fd-44d7-84db-14d442ba6dea\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.765995 4810 request.go:700] Waited for 1.914963445s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.766634 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5wg6\" (UniqueName: \"kubernetes.io/projected/196f4a49-7ed6-4346-bbc1-3908eb17eadc-kube-api-access-n5wg6\") pod \"dns-operator-744455d44c-hqsj6\" (UID: \"196f4a49-7ed6-4346-bbc1-3908eb17eadc\") " pod="openshift-dns-operator/dns-operator-744455d44c-hqsj6"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.792296 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wngmk\" (UniqueName: \"kubernetes.io/projected/39b51457-06ae-4cf2-8b78-4f34bb908819-kube-api-access-wngmk\") pod \"route-controller-manager-6576b87f9c-kt27g\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.794953 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-km9lv"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.806407 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlffs\" (UniqueName: \"kubernetes.io/projected/4ca08908-f929-44c1-a0fe-61e028610dd8-kube-api-access-zlffs\") pod \"machine-config-controller-84d6567774-4fltp\" (UID: \"4ca08908-f929-44c1-a0fe-61e028610dd8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.824705 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf5m9\" (UniqueName: \"kubernetes.io/projected/c67349c1-8617-4228-b6e9-009b94caab7a-kube-api-access-zf5m9\") pod \"apiserver-7bbb656c7d-wf9f4\" (UID: \"c67349c1-8617-4228-b6e9-009b94caab7a\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.841770 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k4cc\" (UniqueName: \"kubernetes.io/projected/db211a74-cc64-4820-b576-179f6affa220-kube-api-access-6k4cc\") pod \"openshift-config-operator-7777fb866f-p68qs\" (UID: \"db211a74-cc64-4820-b576-179f6affa220\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.850743 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.860878 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-hqsj6"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.877029 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf2k9\" (UniqueName: \"kubernetes.io/projected/85c62a01-c818-47b4-92fd-bcd87d8218a8-kube-api-access-vf2k9\") pod \"machine-api-operator-5694c8668f-t88kw\" (UID: \"85c62a01-c818-47b4-92fd-bcd87d8218a8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.884372 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.896456 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x5sg\" (UniqueName: \"kubernetes.io/projected/7c7ec181-e310-4832-af7a-d6a5437e565d-kube-api-access-6x5sg\") pod \"controller-manager-879f6c89f-mwwrl\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.898828 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.921554 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.922119 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e4556783-6235-4338-b56d-50146a186e0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-sssf2\" (UID: \"e4556783-6235-4338-b56d-50146a186e0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.927117 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx4rw\" (UniqueName: \"kubernetes.io/projected/1ad4afbe-31ce-4f61-866c-390b33da3bbb-kube-api-access-fx4rw\") pod \"console-f9d7485db-mh9w2\" (UID: \"1ad4afbe-31ce-4f61-866c-390b33da3bbb\") " pod="openshift-console/console-f9d7485db-mh9w2"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.941180 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xw9ln"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.941651 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6g4l\" (UniqueName: \"kubernetes.io/projected/edd79abd-0e91-4256-9943-f1b08e35b661-kube-api-access-x6g4l\") pod \"kube-storage-version-migrator-operator-b67b599dd-pr5qp\" (UID: \"edd79abd-0e91-4256-9943-f1b08e35b661\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.952456 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-mh9w2"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.965568 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwck2\" (UniqueName: \"kubernetes.io/projected/f5f4931f-9858-4287-9f70-e2bfcb25eabd-kube-api-access-qwck2\") pod \"migrator-59844c95c7-kfptr\" (UID: \"f5f4931f-9858-4287-9f70-e2bfcb25eabd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kfptr"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.977515 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.981377 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pct9c\" (UniqueName: \"kubernetes.io/projected/9da8b1a0-cfbe-4a73-a760-6a3679c58ba3-kube-api-access-pct9c\") pod \"etcd-operator-b45778765-fpl4m\" (UID: \"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.990485 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp"
Jan 10 06:48:35 crc kubenswrapper[4810]: I0110 06:48:35.999761 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.000947 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tftkn\" (UniqueName: \"kubernetes.io/projected/755f846b-01e4-435d-b86c-cbe3f917aa31-kube-api-access-tftkn\") pod \"marketplace-operator-79b997595-pfgf2\" (UID: \"755f846b-01e4-435d-b86c-cbe3f917aa31\") " pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.001185 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.021779 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dnxf\" (UniqueName: \"kubernetes.io/projected/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-kube-api-access-5dnxf\") pod \"collect-profiles-29467125-g742d\" (UID: \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.038108 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.041233 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zsxp\" (UniqueName: \"kubernetes.io/projected/7c3c724b-e4ad-4b31-a6c2-80b82f352ac6-kube-api-access-7zsxp\") pod \"olm-operator-6b444d44fb-d85nm\" (UID: \"7c3c724b-e4ad-4b31-a6c2-80b82f352ac6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.062434 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed76ec22-75d0-43c0-b338-5062f922edda-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vhcxf\" (UID: \"ed76ec22-75d0-43c0-b338-5062f922edda\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.072882 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.081891 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp4l4\" (UniqueName: \"kubernetes.io/projected/931833dc-0102-4ca9-9a92-129d9a97170b-kube-api-access-cp4l4\") pod \"service-ca-9c57cc56f-7vnnq\" (UID: \"931833dc-0102-4ca9-9a92-129d9a97170b\") " pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.094379 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kfptr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.101550 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9a401b43-55ea-45de-8fdd-d00354841be3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-g5s7j\" (UID: \"9a401b43-55ea-45de-8fdd-d00354841be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.127013 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gh2g\" (UniqueName: \"kubernetes.io/projected/9a401b43-55ea-45de-8fdd-d00354841be3-kube-api-access-7gh2g\") pod \"ingress-operator-5b745b69d9-g5s7j\" (UID: \"9a401b43-55ea-45de-8fdd-d00354841be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.139565 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1adffde5-7978-40ac-8d09-7faff1fae25d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mbcrm\" (UID: \"1adffde5-7978-40ac-8d09-7faff1fae25d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.139968 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.153540 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.166492 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.172597 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.180542 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrxsz\" (UniqueName: \"kubernetes.io/projected/49455ed9-e8ab-44c8-9075-2ffbbebe36a8-kube-api-access-mrxsz\") pod \"control-plane-machine-set-operator-78cbb6b69f-r9mqp\" (UID: \"49455ed9-e8ab-44c8-9075-2ffbbebe36a8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.195849 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbtr4\" (UniqueName: \"kubernetes.io/projected/d05445dd-fa70-4239-af6e-56f88234a35b-kube-api-access-rbtr4\") pod \"oauth-openshift-558db77b4-gx7wx\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.205853 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.214746 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d83df240-de42-4c0d-ba24-ead75566bc23-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x9n66\" (UID: \"d83df240-de42-4c0d-ba24-ead75566bc23\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.228729 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pfxt\" (UniqueName: \"kubernetes.io/projected/1adffde5-7978-40ac-8d09-7faff1fae25d-kube-api-access-8pfxt\") pod \"cluster-image-registry-operator-dc59b4c8b-mbcrm\" (UID: \"1adffde5-7978-40ac-8d09-7faff1fae25d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.307726 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.313837 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.322783 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.326956 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8d04d07-f61c-4c6c-9038-7cca9d199ede-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.326996 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nrzgt\" (UID: \"f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nrzgt"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.327044 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.327074 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-registry-tls\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.327104 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8d04d07-f61c-4c6c-9038-7cca9d199ede-trusted-ca\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.327128 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-bound-sa-token\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.327147 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8d04d07-f61c-4c6c-9038-7cca9d199ede-registry-certificates\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.327197 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr56n\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-kube-api-access-sr56n\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.327265 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp9fp\" (UniqueName: \"kubernetes.io/projected/f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0-kube-api-access-hp9fp\") pod \"multus-admission-controller-857f4d67dd-nrzgt\" (UID: \"f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nrzgt"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.327470 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f8d04d07-f61c-4c6c-9038-7cca9d199ede-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: E0110 06:48:36.327497 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:36.827473591 +0000 UTC m=+145.442966694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.329794 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.353506 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.422789 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.429776 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430164 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-bound-sa-token\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430221 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbgrm\" (UniqueName: \"kubernetes.io/projected/8806fc90-52a1-4bbb-a75e-231e034ea87c-kube-api-access-qbgrm\") pod \"catalog-operator-68c6474976-pbshr\" (UID: \"8806fc90-52a1-4bbb-a75e-231e034ea87c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430290 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8d04d07-f61c-4c6c-9038-7cca9d199ede-registry-certificates\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430351 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/655c215c-110b-4504-993a-2263e6462e2b-config-volume\") pod \"dns-default-qxj7l\" (UID: \"655c215c-110b-4504-993a-2263e6462e2b\") " pod="openshift-dns/dns-default-qxj7l"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430372 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/71321d16-a836-4ac1-a8fb-a90f80807174-certs\") pod \"machine-config-server-wnzzr\" (UID: \"71321d16-a836-4ac1-a8fb-a90f80807174\") " pod="openshift-machine-config-operator/machine-config-server-wnzzr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430430 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-plugins-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430468 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-registration-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430492 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjc8q\" (UniqueName: \"kubernetes.io/projected/3767b512-d183-4e69-9534-d2ad2ad5e1c1-kube-api-access-wjc8q\") pod \"machine-approver-56656f9798-78f9r\" (UID: \"3767b512-d183-4e69-9534-d2ad2ad5e1c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430564 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-csi-data-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430642 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvd7\" (UniqueName: \"kubernetes.io/projected/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-kube-api-access-txvd7\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430664 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr56n\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-kube-api-access-sr56n\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430703 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-socket-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430721 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbc78376-08c0-4d09-a627-ca17eec0ceb3-metrics-certs\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430750 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmz76\" (UniqueName: \"kubernetes.io/projected/88e8ea65-2a94-43b3-85ec-0a3583cecb40-kube-api-access-vmz76\") pod \"service-ca-operator-777779d784-fddzq\" (UID: \"88e8ea65-2a94-43b3-85ec-0a3583cecb40\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430771 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7r65\" (UniqueName: \"kubernetes.io/projected/55e33629-ccfb-4893-a062-dab5d945138f-kube-api-access-f7r65\") pod \"cluster-samples-operator-665b6dd947-6v9fx\" (UID: \"55e33629-ccfb-4893-a062-dab5d945138f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430815 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4g8c\" (UniqueName: \"kubernetes.io/projected/655c215c-110b-4504-993a-2263e6462e2b-kube-api-access-w4g8c\") pod \"dns-default-qxj7l\" (UID: \"655c215c-110b-4504-993a-2263e6462e2b\") " pod="openshift-dns/dns-default-qxj7l"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430854 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8806fc90-52a1-4bbb-a75e-231e034ea87c-srv-cert\") pod \"catalog-operator-68c6474976-pbshr\" (UID: \"8806fc90-52a1-4bbb-a75e-231e034ea87c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430917 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp9fp\" (UniqueName: \"kubernetes.io/projected/f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0-kube-api-access-hp9fp\") pod \"multus-admission-controller-857f4d67dd-nrzgt\" (UID: \"f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nrzgt"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430939 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/655c215c-110b-4504-993a-2263e6462e2b-metrics-tls\") pod \"dns-default-qxj7l\" (UID: \"655c215c-110b-4504-993a-2263e6462e2b\") " pod="openshift-dns/dns-default-qxj7l"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.430967 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cbc78376-08c0-4d09-a627-ca17eec0ceb3-stats-auth\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431003 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cbc78376-08c0-4d09-a627-ca17eec0ceb3-default-certificate\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431058 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/71321d16-a836-4ac1-a8fb-a90f80807174-node-bootstrap-token\") pod \"machine-config-server-wnzzr\" (UID: \"71321d16-a836-4ac1-a8fb-a90f80807174\") " pod="openshift-machine-config-operator/machine-config-server-wnzzr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431105 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e8ea65-2a94-43b3-85ec-0a3583cecb40-config\") pod \"service-ca-operator-777779d784-fddzq\" (UID: \"88e8ea65-2a94-43b3-85ec-0a3583cecb40\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431123 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/55e33629-ccfb-4893-a062-dab5d945138f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6v9fx\" (UID: \"55e33629-ccfb-4893-a062-dab5d945138f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431140 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88e8ea65-2a94-43b3-85ec-0a3583cecb40-serving-cert\") pod \"service-ca-operator-777779d784-fddzq\" (UID: \"88e8ea65-2a94-43b3-85ec-0a3583cecb40\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431159 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6dwt\" (UniqueName: \"kubernetes.io/projected/cbc78376-08c0-4d09-a627-ca17eec0ceb3-kube-api-access-s6dwt\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431182 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lq2s\" (UniqueName: \"kubernetes.io/projected/1cc68900-ee14-4825-b4c8-375873200016-kube-api-access-8lq2s\") pod
\"packageserver-d55dfcdfc-wlpm4\" (UID: \"1cc68900-ee14-4825-b4c8-375873200016\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431232 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/351d10cd-c856-424c-a051-a0b4ffcc26a5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4hwtr\" (UID: \"351d10cd-c856-424c-a051-a0b4ffcc26a5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431273 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f8d04d07-f61c-4c6c-9038-7cca9d199ede-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431291 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e25a5c57-ddfb-4b84-aec6-8a512656f614-proxy-tls\") pod \"machine-config-operator-74547568cd-dhw78\" (UID: \"e25a5c57-ddfb-4b84-aec6-8a512656f614\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431349 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3767b512-d183-4e69-9534-d2ad2ad5e1c1-machine-approver-tls\") pod \"machine-approver-56656f9798-78f9r\" (UID: \"3767b512-d183-4e69-9534-d2ad2ad5e1c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r" Jan 10 06:48:36 crc 
kubenswrapper[4810]: I0110 06:48:36.431452 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8d04d07-f61c-4c6c-9038-7cca9d199ede-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431469 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nrzgt\" (UID: \"f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nrzgt" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431487 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e25a5c57-ddfb-4b84-aec6-8a512656f614-images\") pod \"machine-config-operator-74547568cd-dhw78\" (UID: \"e25a5c57-ddfb-4b84-aec6-8a512656f614\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431503 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cc68900-ee14-4825-b4c8-375873200016-webhook-cert\") pod \"packageserver-d55dfcdfc-wlpm4\" (UID: \"1cc68900-ee14-4825-b4c8-375873200016\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431541 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3767b512-d183-4e69-9534-d2ad2ad5e1c1-auth-proxy-config\") pod 
\"machine-approver-56656f9798-78f9r\" (UID: \"3767b512-d183-4e69-9534-d2ad2ad5e1c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431581 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cffc4072-d68e-43c1-b8f5-eb6058d11aa9-cert\") pod \"ingress-canary-q59cx\" (UID: \"cffc4072-d68e-43c1-b8f5-eb6058d11aa9\") " pod="openshift-ingress-canary/ingress-canary-q59cx" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431611 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q9kn\" (UniqueName: \"kubernetes.io/projected/351d10cd-c856-424c-a051-a0b4ffcc26a5-kube-api-access-2q9kn\") pod \"package-server-manager-789f6589d5-4hwtr\" (UID: \"351d10cd-c856-424c-a051-a0b4ffcc26a5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431628 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431657 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8806fc90-52a1-4bbb-a75e-231e034ea87c-profile-collector-cert\") pod \"catalog-operator-68c6474976-pbshr\" (UID: \"8806fc90-52a1-4bbb-a75e-231e034ea87c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431750 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1cc68900-ee14-4825-b4c8-375873200016-tmpfs\") pod \"packageserver-d55dfcdfc-wlpm4\" (UID: \"1cc68900-ee14-4825-b4c8-375873200016\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431771 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-registry-tls\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431788 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbc78376-08c0-4d09-a627-ca17eec0ceb3-service-ca-bundle\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431828 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e25a5c57-ddfb-4b84-aec6-8a512656f614-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dhw78\" (UID: \"e25a5c57-ddfb-4b84-aec6-8a512656f614\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431855 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3767b512-d183-4e69-9534-d2ad2ad5e1c1-config\") pod \"machine-approver-56656f9798-78f9r\" (UID: \"3767b512-d183-4e69-9534-d2ad2ad5e1c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431916 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sx5g\" (UniqueName: \"kubernetes.io/projected/71321d16-a836-4ac1-a8fb-a90f80807174-kube-api-access-5sx5g\") pod \"machine-config-server-wnzzr\" (UID: \"71321d16-a836-4ac1-a8fb-a90f80807174\") " pod="openshift-machine-config-operator/machine-config-server-wnzzr" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431958 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1cc68900-ee14-4825-b4c8-375873200016-apiservice-cert\") pod \"packageserver-d55dfcdfc-wlpm4\" (UID: \"1cc68900-ee14-4825-b4c8-375873200016\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.431977 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrbpb\" (UniqueName: \"kubernetes.io/projected/e25a5c57-ddfb-4b84-aec6-8a512656f614-kube-api-access-xrbpb\") pod \"machine-config-operator-74547568cd-dhw78\" (UID: \"e25a5c57-ddfb-4b84-aec6-8a512656f614\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.432029 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8d04d07-f61c-4c6c-9038-7cca9d199ede-trusted-ca\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.432048 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jksgp\" (UniqueName: \"kubernetes.io/projected/cffc4072-d68e-43c1-b8f5-eb6058d11aa9-kube-api-access-jksgp\") pod \"ingress-canary-q59cx\" (UID: \"cffc4072-d68e-43c1-b8f5-eb6058d11aa9\") " pod="openshift-ingress-canary/ingress-canary-q59cx" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.432065 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-mountpoint-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz" Jan 10 06:48:36 crc kubenswrapper[4810]: E0110 06:48:36.432242 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:36.93218331 +0000 UTC m=+145.547676193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.436658 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8d04d07-f61c-4c6c-9038-7cca9d199ede-registry-certificates\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.438534 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8d04d07-f61c-4c6c-9038-7cca9d199ede-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.444348 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8d04d07-f61c-4c6c-9038-7cca9d199ede-trusted-ca\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.454614 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-registry-tls\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: 
\"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.454640 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nrzgt\" (UID: \"f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nrzgt" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.455116 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f8d04d07-f61c-4c6c-9038-7cca9d199ede-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.463353 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-bound-sa-token\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.483660 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp9fp\" (UniqueName: \"kubernetes.io/projected/f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0-kube-api-access-hp9fp\") pod \"multus-admission-controller-857f4d67dd-nrzgt\" (UID: \"f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nrzgt" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.527273 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr56n\" (UniqueName: 
\"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-kube-api-access-sr56n\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.533424 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-csi-data-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.533462 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txvd7\" (UniqueName: \"kubernetes.io/projected/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-kube-api-access-txvd7\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.533495 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-socket-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534292 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbc78376-08c0-4d09-a627-ca17eec0ceb3-metrics-certs\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534330 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7r65\" (UniqueName: 
\"kubernetes.io/projected/55e33629-ccfb-4893-a062-dab5d945138f-kube-api-access-f7r65\") pod \"cluster-samples-operator-665b6dd947-6v9fx\" (UID: \"55e33629-ccfb-4893-a062-dab5d945138f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534356 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmz76\" (UniqueName: \"kubernetes.io/projected/88e8ea65-2a94-43b3-85ec-0a3583cecb40-kube-api-access-vmz76\") pod \"service-ca-operator-777779d784-fddzq\" (UID: \"88e8ea65-2a94-43b3-85ec-0a3583cecb40\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534388 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4g8c\" (UniqueName: \"kubernetes.io/projected/655c215c-110b-4504-993a-2263e6462e2b-kube-api-access-w4g8c\") pod \"dns-default-qxj7l\" (UID: \"655c215c-110b-4504-993a-2263e6462e2b\") " pod="openshift-dns/dns-default-qxj7l" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534412 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8806fc90-52a1-4bbb-a75e-231e034ea87c-srv-cert\") pod \"catalog-operator-68c6474976-pbshr\" (UID: \"8806fc90-52a1-4bbb-a75e-231e034ea87c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534435 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/655c215c-110b-4504-993a-2263e6462e2b-metrics-tls\") pod \"dns-default-qxj7l\" (UID: \"655c215c-110b-4504-993a-2263e6462e2b\") " pod="openshift-dns/dns-default-qxj7l" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534468 4810 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cbc78376-08c0-4d09-a627-ca17eec0ceb3-stats-auth\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534489 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cbc78376-08c0-4d09-a627-ca17eec0ceb3-default-certificate\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534514 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/71321d16-a836-4ac1-a8fb-a90f80807174-node-bootstrap-token\") pod \"machine-config-server-wnzzr\" (UID: \"71321d16-a836-4ac1-a8fb-a90f80807174\") " pod="openshift-machine-config-operator/machine-config-server-wnzzr" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534538 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e8ea65-2a94-43b3-85ec-0a3583cecb40-config\") pod \"service-ca-operator-777779d784-fddzq\" (UID: \"88e8ea65-2a94-43b3-85ec-0a3583cecb40\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534559 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/55e33629-ccfb-4893-a062-dab5d945138f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6v9fx\" (UID: \"55e33629-ccfb-4893-a062-dab5d945138f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx" Jan 10 06:48:36 crc kubenswrapper[4810]: 
I0110 06:48:36.534582 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88e8ea65-2a94-43b3-85ec-0a3583cecb40-serving-cert\") pod \"service-ca-operator-777779d784-fddzq\" (UID: \"88e8ea65-2a94-43b3-85ec-0a3583cecb40\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534604 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6dwt\" (UniqueName: \"kubernetes.io/projected/cbc78376-08c0-4d09-a627-ca17eec0ceb3-kube-api-access-s6dwt\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534627 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lq2s\" (UniqueName: \"kubernetes.io/projected/1cc68900-ee14-4825-b4c8-375873200016-kube-api-access-8lq2s\") pod \"packageserver-d55dfcdfc-wlpm4\" (UID: \"1cc68900-ee14-4825-b4c8-375873200016\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534655 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/351d10cd-c856-424c-a051-a0b4ffcc26a5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4hwtr\" (UID: \"351d10cd-c856-424c-a051-a0b4ffcc26a5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534682 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e25a5c57-ddfb-4b84-aec6-8a512656f614-proxy-tls\") pod \"machine-config-operator-74547568cd-dhw78\" 
(UID: \"e25a5c57-ddfb-4b84-aec6-8a512656f614\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534706 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3767b512-d183-4e69-9534-d2ad2ad5e1c1-machine-approver-tls\") pod \"machine-approver-56656f9798-78f9r\" (UID: \"3767b512-d183-4e69-9534-d2ad2ad5e1c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534736 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e25a5c57-ddfb-4b84-aec6-8a512656f614-images\") pod \"machine-config-operator-74547568cd-dhw78\" (UID: \"e25a5c57-ddfb-4b84-aec6-8a512656f614\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534761 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cc68900-ee14-4825-b4c8-375873200016-webhook-cert\") pod \"packageserver-d55dfcdfc-wlpm4\" (UID: \"1cc68900-ee14-4825-b4c8-375873200016\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534784 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3767b512-d183-4e69-9534-d2ad2ad5e1c1-auth-proxy-config\") pod \"machine-approver-56656f9798-78f9r\" (UID: \"3767b512-d183-4e69-9534-d2ad2ad5e1c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r" Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534803 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/cffc4072-d68e-43c1-b8f5-eb6058d11aa9-cert\") pod \"ingress-canary-q59cx\" (UID: \"cffc4072-d68e-43c1-b8f5-eb6058d11aa9\") " pod="openshift-ingress-canary/ingress-canary-q59cx"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534880 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q9kn\" (UniqueName: \"kubernetes.io/projected/351d10cd-c856-424c-a051-a0b4ffcc26a5-kube-api-access-2q9kn\") pod \"package-server-manager-789f6589d5-4hwtr\" (UID: \"351d10cd-c856-424c-a051-a0b4ffcc26a5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534908 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534932 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8806fc90-52a1-4bbb-a75e-231e034ea87c-profile-collector-cert\") pod \"catalog-operator-68c6474976-pbshr\" (UID: \"8806fc90-52a1-4bbb-a75e-231e034ea87c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534958 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1cc68900-ee14-4825-b4c8-375873200016-tmpfs\") pod \"packageserver-d55dfcdfc-wlpm4\" (UID: \"1cc68900-ee14-4825-b4c8-375873200016\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.534981 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbc78376-08c0-4d09-a627-ca17eec0ceb3-service-ca-bundle\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.535005 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e25a5c57-ddfb-4b84-aec6-8a512656f614-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dhw78\" (UID: \"e25a5c57-ddfb-4b84-aec6-8a512656f614\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.535057 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3767b512-d183-4e69-9534-d2ad2ad5e1c1-config\") pod \"machine-approver-56656f9798-78f9r\" (UID: \"3767b512-d183-4e69-9534-d2ad2ad5e1c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.535079 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sx5g\" (UniqueName: \"kubernetes.io/projected/71321d16-a836-4ac1-a8fb-a90f80807174-kube-api-access-5sx5g\") pod \"machine-config-server-wnzzr\" (UID: \"71321d16-a836-4ac1-a8fb-a90f80807174\") " pod="openshift-machine-config-operator/machine-config-server-wnzzr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.535102 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1cc68900-ee14-4825-b4c8-375873200016-apiservice-cert\") pod \"packageserver-d55dfcdfc-wlpm4\" (UID: \"1cc68900-ee14-4825-b4c8-375873200016\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.535124 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrbpb\" (UniqueName: \"kubernetes.io/projected/e25a5c57-ddfb-4b84-aec6-8a512656f614-kube-api-access-xrbpb\") pod \"machine-config-operator-74547568cd-dhw78\" (UID: \"e25a5c57-ddfb-4b84-aec6-8a512656f614\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.535146 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jksgp\" (UniqueName: \"kubernetes.io/projected/cffc4072-d68e-43c1-b8f5-eb6058d11aa9-kube-api-access-jksgp\") pod \"ingress-canary-q59cx\" (UID: \"cffc4072-d68e-43c1-b8f5-eb6058d11aa9\") " pod="openshift-ingress-canary/ingress-canary-q59cx"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.535165 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-mountpoint-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.535187 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbgrm\" (UniqueName: \"kubernetes.io/projected/8806fc90-52a1-4bbb-a75e-231e034ea87c-kube-api-access-qbgrm\") pod \"catalog-operator-68c6474976-pbshr\" (UID: \"8806fc90-52a1-4bbb-a75e-231e034ea87c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.535244 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/655c215c-110b-4504-993a-2263e6462e2b-config-volume\") pod \"dns-default-qxj7l\" (UID: \"655c215c-110b-4504-993a-2263e6462e2b\") " pod="openshift-dns/dns-default-qxj7l"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.535267 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/71321d16-a836-4ac1-a8fb-a90f80807174-certs\") pod \"machine-config-server-wnzzr\" (UID: \"71321d16-a836-4ac1-a8fb-a90f80807174\") " pod="openshift-machine-config-operator/machine-config-server-wnzzr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.535300 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-plugins-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.533585 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-csi-data-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.535324 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-registration-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.535346 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjc8q\" (UniqueName: \"kubernetes.io/projected/3767b512-d183-4e69-9534-d2ad2ad5e1c1-kube-api-access-wjc8q\") pod \"machine-approver-56656f9798-78f9r\" (UID: \"3767b512-d183-4e69-9534-d2ad2ad5e1c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.533927 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-socket-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: E0110 06:48:36.535898 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.035885902 +0000 UTC m=+145.651378785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.536751 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e25a5c57-ddfb-4b84-aec6-8a512656f614-auth-proxy-config\") pod \"machine-config-operator-74547568cd-dhw78\" (UID: \"e25a5c57-ddfb-4b84-aec6-8a512656f614\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.537059 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1cc68900-ee14-4825-b4c8-375873200016-tmpfs\") pod \"packageserver-d55dfcdfc-wlpm4\" (UID: \"1cc68900-ee14-4825-b4c8-375873200016\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.538169 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3767b512-d183-4e69-9534-d2ad2ad5e1c1-config\") pod \"machine-approver-56656f9798-78f9r\" (UID: \"3767b512-d183-4e69-9534-d2ad2ad5e1c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.538335 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbc78376-08c0-4d09-a627-ca17eec0ceb3-service-ca-bundle\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.538889 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/655c215c-110b-4504-993a-2263e6462e2b-config-volume\") pod \"dns-default-qxj7l\" (UID: \"655c215c-110b-4504-993a-2263e6462e2b\") " pod="openshift-dns/dns-default-qxj7l"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.539012 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-mountpoint-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.539520 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e25a5c57-ddfb-4b84-aec6-8a512656f614-images\") pod \"machine-config-operator-74547568cd-dhw78\" (UID: \"e25a5c57-ddfb-4b84-aec6-8a512656f614\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.540231 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3767b512-d183-4e69-9534-d2ad2ad5e1c1-auth-proxy-config\") pod \"machine-approver-56656f9798-78f9r\" (UID: \"3767b512-d183-4e69-9534-d2ad2ad5e1c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.540254 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8806fc90-52a1-4bbb-a75e-231e034ea87c-srv-cert\") pod \"catalog-operator-68c6474976-pbshr\" (UID: \"8806fc90-52a1-4bbb-a75e-231e034ea87c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.540489 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-plugins-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.540678 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/655c215c-110b-4504-993a-2263e6462e2b-metrics-tls\") pod \"dns-default-qxj7l\" (UID: \"655c215c-110b-4504-993a-2263e6462e2b\") " pod="openshift-dns/dns-default-qxj7l"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.540762 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-registration-dir\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.541653 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cbc78376-08c0-4d09-a627-ca17eec0ceb3-stats-auth\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.542518 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e8ea65-2a94-43b3-85ec-0a3583cecb40-config\") pod \"service-ca-operator-777779d784-fddzq\" (UID: \"88e8ea65-2a94-43b3-85ec-0a3583cecb40\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.542967 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3767b512-d183-4e69-9534-d2ad2ad5e1c1-machine-approver-tls\") pod \"machine-approver-56656f9798-78f9r\" (UID: \"3767b512-d183-4e69-9534-d2ad2ad5e1c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.543093 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8806fc90-52a1-4bbb-a75e-231e034ea87c-profile-collector-cert\") pod \"catalog-operator-68c6474976-pbshr\" (UID: \"8806fc90-52a1-4bbb-a75e-231e034ea87c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.543262 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e25a5c57-ddfb-4b84-aec6-8a512656f614-proxy-tls\") pod \"machine-config-operator-74547568cd-dhw78\" (UID: \"e25a5c57-ddfb-4b84-aec6-8a512656f614\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.543653 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbc78376-08c0-4d09-a627-ca17eec0ceb3-metrics-certs\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.544167 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1cc68900-ee14-4825-b4c8-375873200016-apiservice-cert\") pod \"packageserver-d55dfcdfc-wlpm4\" (UID: \"1cc68900-ee14-4825-b4c8-375873200016\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.545014 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cffc4072-d68e-43c1-b8f5-eb6058d11aa9-cert\") pod \"ingress-canary-q59cx\" (UID: \"cffc4072-d68e-43c1-b8f5-eb6058d11aa9\") " pod="openshift-ingress-canary/ingress-canary-q59cx"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.546868 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/351d10cd-c856-424c-a051-a0b4ffcc26a5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4hwtr\" (UID: \"351d10cd-c856-424c-a051-a0b4ffcc26a5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.547273 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/71321d16-a836-4ac1-a8fb-a90f80807174-node-bootstrap-token\") pod \"machine-config-server-wnzzr\" (UID: \"71321d16-a836-4ac1-a8fb-a90f80807174\") " pod="openshift-machine-config-operator/machine-config-server-wnzzr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.547950 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/55e33629-ccfb-4893-a062-dab5d945138f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6v9fx\" (UID: \"55e33629-ccfb-4893-a062-dab5d945138f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.549819 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cc68900-ee14-4825-b4c8-375873200016-webhook-cert\") pod \"packageserver-d55dfcdfc-wlpm4\" (UID: \"1cc68900-ee14-4825-b4c8-375873200016\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.551825 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/71321d16-a836-4ac1-a8fb-a90f80807174-certs\") pod \"machine-config-server-wnzzr\" (UID: \"71321d16-a836-4ac1-a8fb-a90f80807174\") " pod="openshift-machine-config-operator/machine-config-server-wnzzr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.552546 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88e8ea65-2a94-43b3-85ec-0a3583cecb40-serving-cert\") pod \"service-ca-operator-777779d784-fddzq\" (UID: \"88e8ea65-2a94-43b3-85ec-0a3583cecb40\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.558537 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cbc78376-08c0-4d09-a627-ca17eec0ceb3-default-certificate\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.559601 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txvd7\" (UniqueName: \"kubernetes.io/projected/abaa9d2a-1e13-4aa3-aca6-fd759a41d46f-kube-api-access-txvd7\") pod \"csi-hostpathplugin-6bmqz\" (UID: \"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f\") " pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.586429 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmz76\" (UniqueName: \"kubernetes.io/projected/88e8ea65-2a94-43b3-85ec-0a3583cecb40-kube-api-access-vmz76\") pod \"service-ca-operator-777779d784-fddzq\" (UID: \"88e8ea65-2a94-43b3-85ec-0a3583cecb40\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.610724 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjc8q\" (UniqueName: \"kubernetes.io/projected/3767b512-d183-4e69-9534-d2ad2ad5e1c1-kube-api-access-wjc8q\") pod \"machine-approver-56656f9798-78f9r\" (UID: \"3767b512-d183-4e69-9534-d2ad2ad5e1c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.633965 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4g8c\" (UniqueName: \"kubernetes.io/projected/655c215c-110b-4504-993a-2263e6462e2b-kube-api-access-w4g8c\") pod \"dns-default-qxj7l\" (UID: \"655c215c-110b-4504-993a-2263e6462e2b\") " pod="openshift-dns/dns-default-qxj7l"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.636667 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:48:36 crc kubenswrapper[4810]: E0110 06:48:36.637032 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.137017487 +0000 UTC m=+145.752510370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.649382 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7r65\" (UniqueName: \"kubernetes.io/projected/55e33629-ccfb-4893-a062-dab5d945138f-kube-api-access-f7r65\") pod \"cluster-samples-operator-665b6dd947-6v9fx\" (UID: \"55e33629-ccfb-4893-a062-dab5d945138f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.664373 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrbpb\" (UniqueName: \"kubernetes.io/projected/e25a5c57-ddfb-4b84-aec6-8a512656f614-kube-api-access-xrbpb\") pod \"machine-config-operator-74547568cd-dhw78\" (UID: \"e25a5c57-ddfb-4b84-aec6-8a512656f614\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.714373 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sx5g\" (UniqueName: \"kubernetes.io/projected/71321d16-a836-4ac1-a8fb-a90f80807174-kube-api-access-5sx5g\") pod \"machine-config-server-wnzzr\" (UID: \"71321d16-a836-4ac1-a8fb-a90f80807174\") " pod="openshift-machine-config-operator/machine-config-server-wnzzr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.722755 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q9kn\" (UniqueName: \"kubernetes.io/projected/351d10cd-c856-424c-a051-a0b4ffcc26a5-kube-api-access-2q9kn\") pod \"package-server-manager-789f6589d5-4hwtr\" (UID: \"351d10cd-c856-424c-a051-a0b4ffcc26a5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.738053 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: E0110 06:48:36.738430 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.238417477 +0000 UTC m=+145.853910350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.743068 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jksgp\" (UniqueName: \"kubernetes.io/projected/cffc4072-d68e-43c1-b8f5-eb6058d11aa9-kube-api-access-jksgp\") pod \"ingress-canary-q59cx\" (UID: \"cffc4072-d68e-43c1-b8f5-eb6058d11aa9\") " pod="openshift-ingress-canary/ingress-canary-q59cx"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.765962 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbgrm\" (UniqueName: \"kubernetes.io/projected/8806fc90-52a1-4bbb-a75e-231e034ea87c-kube-api-access-qbgrm\") pod \"catalog-operator-68c6474976-pbshr\" (UID: \"8806fc90-52a1-4bbb-a75e-231e034ea87c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.783464 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nrzgt"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.790012 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.801678 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.803473 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.804603 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lq2s\" (UniqueName: \"kubernetes.io/projected/1cc68900-ee14-4825-b4c8-375873200016-kube-api-access-8lq2s\") pod \"packageserver-d55dfcdfc-wlpm4\" (UID: \"1cc68900-ee14-4825-b4c8-375873200016\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.809783 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.810362 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6dwt\" (UniqueName: \"kubernetes.io/projected/cbc78376-08c0-4d09-a627-ca17eec0ceb3-kube-api-access-s6dwt\") pod \"router-default-5444994796-qhr8m\" (UID: \"cbc78376-08c0-4d09-a627-ca17eec0ceb3\") " pod="openshift-ingress/router-default-5444994796-qhr8m"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.817329 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qhr8m"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.823789 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.832745 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.837148 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.838614 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:48:36 crc kubenswrapper[4810]: E0110 06:48:36.838719 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.338698159 +0000 UTC m=+145.954191042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.838802 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: E0110 06:48:36.839152 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.339138461 +0000 UTC m=+145.954631344 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.856537 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-6bmqz"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.865734 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qxj7l"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.872491 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-q59cx"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.878965 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wnzzr"
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.939787 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:48:36 crc kubenswrapper[4810]: E0110 06:48:36.939903 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.439881815 +0000 UTC m=+146.055374698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:36 crc kubenswrapper[4810]: I0110 06:48:36.940108 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:36 crc kubenswrapper[4810]: E0110 06:48:36.940458 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.440440149 +0000 UTC m=+146.055933052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.041890 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:48:37 crc kubenswrapper[4810]: E0110 06:48:37.042312 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.542293932 +0000 UTC m=+146.157786815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.143647 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:37 crc kubenswrapper[4810]: E0110 06:48:37.144183 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.644171917 +0000 UTC m=+146.259664790 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.244461 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:48:37 crc kubenswrapper[4810]: E0110 06:48:37.244851 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.744836748 +0000 UTC m=+146.360329631 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.345440 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:37 crc kubenswrapper[4810]: E0110 06:48:37.345716 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.845705536 +0000 UTC m=+146.461198419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.357774 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-kfptr"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.359224 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp"] Jan 10 06:48:37 crc kubenswrapper[4810]: W0110 06:48:37.370330 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ca08908_f929_44c1_a0fe_61e028610dd8.slice/crio-8f334c1df6b0a6467e66a9c6af5e6543e6ae088181ed8aa4cba7b138026d42c1 WatchSource:0}: Error finding container 8f334c1df6b0a6467e66a9c6af5e6543e6ae088181ed8aa4cba7b138026d42c1: Status 404 returned error can't find the container with id 8f334c1df6b0a6467e66a9c6af5e6543e6ae088181ed8aa4cba7b138026d42c1 Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.371496 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2"] Jan 10 06:48:37 crc kubenswrapper[4810]: W0110 06:48:37.372759 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5f4931f_9858_4287_9f70_e2bfcb25eabd.slice/crio-003cfa40612d05bd591e8638d854a201425ef8a4d62da742a6c23f4a31e076e2 WatchSource:0}: Error finding container 
003cfa40612d05bd591e8638d854a201425ef8a4d62da742a6c23f4a31e076e2: Status 404 returned error can't find the container with id 003cfa40612d05bd591e8638d854a201425ef8a4d62da742a6c23f4a31e076e2 Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.386182 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-mh9w2"] Jan 10 06:48:37 crc kubenswrapper[4810]: W0110 06:48:37.387507 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4556783_6235_4338_b56d_50146a186e0d.slice/crio-0aa8f9e0d79c7b221101de660044c8dce664e993959c9c501b18d7e81b900c9b WatchSource:0}: Error finding container 0aa8f9e0d79c7b221101de660044c8dce664e993959c9c501b18d7e81b900c9b: Status 404 returned error can't find the container with id 0aa8f9e0d79c7b221101de660044c8dce664e993959c9c501b18d7e81b900c9b Jan 10 06:48:37 crc kubenswrapper[4810]: W0110 06:48:37.403644 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ad4afbe_31ce_4f61_866c_390b33da3bbb.slice/crio-9dbefac2edfeb5b23d79d62a7341961659a3dbb0e893a82ebc4cfc65d24f4291 WatchSource:0}: Error finding container 9dbefac2edfeb5b23d79d62a7341961659a3dbb0e893a82ebc4cfc65d24f4291: Status 404 returned error can't find the container with id 9dbefac2edfeb5b23d79d62a7341961659a3dbb0e893a82ebc4cfc65d24f4291 Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.421565 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.422672 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.434249 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.439225 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xw9ln"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.441807 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.446830 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:37 crc kubenswrapper[4810]: E0110 06:48:37.447309 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:37.947293321 +0000 UTC m=+146.562786204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:37 crc kubenswrapper[4810]: W0110 06:48:37.448812 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod845a1b6a_e7d4_467d_a835_053708fed54f.slice/crio-a707cdc99d84dc1aa6202156cd1aee4232feb0a5f52077710694d3b595cc512f WatchSource:0}: Error finding container a707cdc99d84dc1aa6202156cd1aee4232feb0a5f52077710694d3b595cc512f: Status 404 returned error can't find the container with id a707cdc99d84dc1aa6202156cd1aee4232feb0a5f52077710694d3b595cc512f Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.467059 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mwwrl"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.470775 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.472377 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7vnnq"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.473575 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nxhxd"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.475308 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-9dnp6"] Jan 10 06:48:37 crc kubenswrapper[4810]: W0110 06:48:37.475482 4810 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod931833dc_0102_4ca9_9a92_129d9a97170b.slice/crio-ff928a810409b33b41302f37370eee35b8f43b9ab313e324820916df6ea2ab5a WatchSource:0}: Error finding container ff928a810409b33b41302f37370eee35b8f43b9ab313e324820916df6ea2ab5a: Status 404 returned error can't find the container with id ff928a810409b33b41302f37370eee35b8f43b9ab313e324820916df6ea2ab5a Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.477298 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-km9lv"] Jan 10 06:48:37 crc kubenswrapper[4810]: W0110 06:48:37.478443 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39b51457_06ae_4cf2_8b78_4f34bb908819.slice/crio-0077b49623f036e56963d75ab9115b13ecfd468901f04e484b6a8d146ca5c488 WatchSource:0}: Error finding container 0077b49623f036e56963d75ab9115b13ecfd468901f04e484b6a8d146ca5c488: Status 404 returned error can't find the container with id 0077b49623f036e56963d75ab9115b13ecfd468901f04e484b6a8d146ca5c488 Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.478774 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6"] Jan 10 06:48:37 crc kubenswrapper[4810]: W0110 06:48:37.481149 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196f4a49_7ed6_4346_bbc1_3908eb17eadc.slice/crio-1296a4045e37f236b5f9d0ba87ccef7a23e68a3677b4a44d7aa031bfe523a3c7 WatchSource:0}: Error finding container 1296a4045e37f236b5f9d0ba87ccef7a23e68a3677b4a44d7aa031bfe523a3c7: Status 404 returned error can't find the container with id 1296a4045e37f236b5f9d0ba87ccef7a23e68a3677b4a44d7aa031bfe523a3c7 Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.482486 4810 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fpl4m"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.483705 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hqsj6"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.486998 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fddzq"] Jan 10 06:48:37 crc kubenswrapper[4810]: W0110 06:48:37.499403 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod058be455_c482_4531_9bd8_08b4013e7d3c.slice/crio-b819d8ae235a3b95ede6eac61bb66d623462c35f4d36a2d9bca443092b0e8b01 WatchSource:0}: Error finding container b819d8ae235a3b95ede6eac61bb66d623462c35f4d36a2d9bca443092b0e8b01: Status 404 returned error can't find the container with id b819d8ae235a3b95ede6eac61bb66d623462c35f4d36a2d9bca443092b0e8b01 Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.504817 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.548104 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:37 crc kubenswrapper[4810]: E0110 06:48:37.548476 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-10 06:48:38.048456647 +0000 UTC m=+146.663949540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.628667 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" event={"ID":"7c7ec181-e310-4832-af7a-d6a5437e565d","Type":"ContainerStarted","Data":"f8f69d7179caa22c634085bea1f9caa3a9c324d909082a61cbd232413e8e9bff"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.632716 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" event={"ID":"c67349c1-8617-4228-b6e9-009b94caab7a","Type":"ContainerStarted","Data":"01329baaeaebc320f45a13c7a03ca997a216164ba513cbabdf96ab6eaf8e3aaa"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.633981 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hqsj6" event={"ID":"196f4a49-7ed6-4346-bbc1-3908eb17eadc","Type":"ContainerStarted","Data":"1296a4045e37f236b5f9d0ba87ccef7a23e68a3677b4a44d7aa031bfe523a3c7"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.635180 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp" event={"ID":"edd79abd-0e91-4256-9943-f1b08e35b661","Type":"ContainerStarted","Data":"dfbf4836aec356e529dc4369b3fcf399fad458a1f967f32ed547e5c62bdd3dd4"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 
06:48:37.636663 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mh9w2" event={"ID":"1ad4afbe-31ce-4f61-866c-390b33da3bbb","Type":"ContainerStarted","Data":"9dbefac2edfeb5b23d79d62a7341961659a3dbb0e893a82ebc4cfc65d24f4291"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.637924 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" event={"ID":"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3","Type":"ContainerStarted","Data":"7a25478fd10d332757e43431221847d013d2b8eeed600acff546dd81aa1e2e13"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.638748 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66" event={"ID":"d83df240-de42-4c0d-ba24-ead75566bc23","Type":"ContainerStarted","Data":"0a5f6d65b484dea386e6e88864663ab538eff2461328006d4af7fdde31ffae02"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.640108 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" event={"ID":"39b51457-06ae-4cf2-8b78-4f34bb908819","Type":"ContainerStarted","Data":"0077b49623f036e56963d75ab9115b13ecfd468901f04e484b6a8d146ca5c488"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.641258 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd" event={"ID":"e65db429-f69c-4a8a-982b-0566164c6296","Type":"ContainerStarted","Data":"1c634e1e422479e01867bbe54cc85416b0c12ea51702435c5cd18b0149f6acd1"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.642472 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wnzzr" 
event={"ID":"71321d16-a836-4ac1-a8fb-a90f80807174","Type":"ContainerStarted","Data":"36218377043d54f952157975d1fc83244d3a6650273d01772caa2a3f4e5d8e54"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.642526 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wnzzr" event={"ID":"71321d16-a836-4ac1-a8fb-a90f80807174","Type":"ContainerStarted","Data":"59746b4aceb0e5b574cf827723ff4bcf892a587e4f39c2984ac6f8fb45b0cd74"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.643578 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2" event={"ID":"e4556783-6235-4338-b56d-50146a186e0d","Type":"ContainerStarted","Data":"0aa8f9e0d79c7b221101de660044c8dce664e993959c9c501b18d7e81b900c9b"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.647097 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq" event={"ID":"88e8ea65-2a94-43b3-85ec-0a3583cecb40","Type":"ContainerStarted","Data":"e3c404707006259fed601906d305072dee95d23ebe278e10aefbbb3025d75cb0"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.651739 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:37 crc kubenswrapper[4810]: E0110 06:48:37.651922 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:38.151898441 +0000 UTC m=+146.767391334 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.652169 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:37 crc kubenswrapper[4810]: E0110 06:48:37.652473 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:38.152460637 +0000 UTC m=+146.767953520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.652995 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qhr8m" event={"ID":"cbc78376-08c0-4d09-a627-ca17eec0ceb3","Type":"ContainerStarted","Data":"55e7906a3ebe3cffac388adc568ff825e1528b13d5f2e2c4ec10dc81c60c0a50"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.653025 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qhr8m" event={"ID":"cbc78376-08c0-4d09-a627-ca17eec0ceb3","Type":"ContainerStarted","Data":"6d8eacafcba1db6e8ffe93a4c00561234bb1896074d3b509108702a9be9f049c"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.653798 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp" event={"ID":"4ca08908-f929-44c1-a0fe-61e028610dd8","Type":"ContainerStarted","Data":"8f334c1df6b0a6467e66a9c6af5e6543e6ae088181ed8aa4cba7b138026d42c1"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.654470 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" event={"ID":"7e755f5b-0a8c-4108-a133-8d5955de3641","Type":"ContainerStarted","Data":"b010cbb52459dc185da867550ac8f32e97c41a8abb542e763ff84e33eb94a36c"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.656379 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r" 
event={"ID":"3767b512-d183-4e69-9534-d2ad2ad5e1c1","Type":"ContainerStarted","Data":"89212775394d3fa2b983693224762d08d1ac3083810efcfc7e761ce977a7e024"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.657316 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kfptr" event={"ID":"f5f4931f-9858-4287-9f70-e2bfcb25eabd","Type":"ContainerStarted","Data":"003cfa40612d05bd591e8638d854a201425ef8a4d62da742a6c23f4a31e076e2"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.658302 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-km9lv" event={"ID":"058be455-c482-4531-9bd8-08b4013e7d3c","Type":"ContainerStarted","Data":"b819d8ae235a3b95ede6eac61bb66d623462c35f4d36a2d9bca443092b0e8b01"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.658928 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2" event={"ID":"194fadc9-26fd-44d7-84db-14d442ba6dea","Type":"ContainerStarted","Data":"c9e00910b38f05d46e497d6f14918a6f61d8dde7f86746b77dbda162658d2609"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.660502 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq" event={"ID":"931833dc-0102-4ca9-9a92-129d9a97170b","Type":"ContainerStarted","Data":"ff928a810409b33b41302f37370eee35b8f43b9ab313e324820916df6ea2ab5a"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.667076 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6" event={"ID":"7706c50d-1b6d-4add-b687-3f59c6e080d2","Type":"ContainerStarted","Data":"1790b24cc18191eb8e2217e4db3839ebfffbf888da8b9e293a3c4b7e2ac32074"} Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.680258 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr"] Jan 10 06:48:37 crc kubenswrapper[4810]: W0110 06:48:37.703821 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod351d10cd_c856_424c_a051_a0b4ffcc26a5.slice/crio-6d8207b31c23080d1041d16b5093e65dca238f8f6128b479aa190b90472c4efa WatchSource:0}: Error finding container 6d8207b31c23080d1041d16b5093e65dca238f8f6128b479aa190b90472c4efa: Status 404 returned error can't find the container with id 6d8207b31c23080d1041d16b5093e65dca238f8f6128b479aa190b90472c4efa Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.740858 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf"] Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.753178 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:37 crc kubenswrapper[4810]: E0110 06:48:37.753313 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:38.253287753 +0000 UTC m=+146.868780636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.753473 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:37 crc kubenswrapper[4810]: E0110 06:48:37.753709 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:38.253699093 +0000 UTC m=+146.869191976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.755993 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm"]
Jan 10 06:48:37 crc kubenswrapper[4810]: W0110 06:48:37.792497 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded76ec22_75d0_43c0_b338_5062f922edda.slice/crio-d7edf6b4d0759b63861dba2beb0cc0d96c9bea579c3876adf1ee6970886400df WatchSource:0}: Error finding container d7edf6b4d0759b63861dba2beb0cc0d96c9bea579c3876adf1ee6970886400df: Status 404 returned error can't find the container with id d7edf6b4d0759b63861dba2beb0cc0d96c9bea579c3876adf1ee6970886400df
Jan 10 06:48:37 crc kubenswrapper[4810]: W0110 06:48:37.809694 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c3c724b_e4ad_4b31_a6c2_80b82f352ac6.slice/crio-f032abbab44dab3fb0edffa728dc7a4b80e96876b071d24228e19cdb33ecbfa9 WatchSource:0}: Error finding container f032abbab44dab3fb0edffa728dc7a4b80e96876b071d24228e19cdb33ecbfa9: Status 404 returned error can't find the container with id f032abbab44dab3fb0edffa728dc7a4b80e96876b071d24228e19cdb33ecbfa9
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.853957 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:48:37 crc kubenswrapper[4810]: E0110 06:48:37.854181 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:38.35415713 +0000 UTC m=+146.969650013 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.854268 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.854347 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:48:37 crc kubenswrapper[4810]: E0110 06:48:37.854765 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:38.354756925 +0000 UTC m=+146.970249808 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.923371 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.955915 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.956074 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.956108 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.956129 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.958346 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pfgf2"]
Jan 10 06:48:37 crc kubenswrapper[4810]: E0110 06:48:37.958646 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:38.458629962 +0000 UTC m=+147.074122845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.964584 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.964679 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:48:37 crc kubenswrapper[4810]: I0110 06:48:37.965041 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:48:37 crc kubenswrapper[4810]: W0110 06:48:37.984604 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod755f846b_01e4_435d_b86c_cbe3f917aa31.slice/crio-38a2de2a402d77923f0ab26011b1cc22f2eeb11c56e3b9f4ca1495dfd1da79df WatchSource:0}: Error finding container 38a2de2a402d77923f0ab26011b1cc22f2eeb11c56e3b9f4ca1495dfd1da79df: Status 404 returned error can't find the container with id 38a2de2a402d77923f0ab26011b1cc22f2eeb11c56e3b9f4ca1495dfd1da79df
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.057464 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:38 crc kubenswrapper[4810]: E0110 06:48:38.057748 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:38.557736203 +0000 UTC m=+147.173229086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.102370 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p68qs"]
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.110009 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.117816 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.122859 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.148125 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm"]
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.158236 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:48:38 crc kubenswrapper[4810]: E0110 06:48:38.158535 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:38.658521558 +0000 UTC m=+147.274014441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.166601 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gx7wx"]
Jan 10 06:48:38 crc kubenswrapper[4810]: W0110 06:48:38.171324 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1adffde5_7978_40ac_8d09_7faff1fae25d.slice/crio-0e454112824887da93fc01da0d96ab92ceb2f3f9e035cec105ce4b49b3dc71a6 WatchSource:0}: Error finding container 0e454112824887da93fc01da0d96ab92ceb2f3f9e035cec105ce4b49b3dc71a6: Status 404 returned error can't find the container with id 0e454112824887da93fc01da0d96ab92ceb2f3f9e035cec105ce4b49b3dc71a6
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.179014 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4"]
Jan 10 06:48:38 crc kubenswrapper[4810]: W0110 06:48:38.197161 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cc68900_ee14_4825_b4c8_375873200016.slice/crio-ff0e0960938d6cc9aa17c72cb1c4ce3585afbbc6125dd2655ab1ab381872c287 WatchSource:0}: Error finding container ff0e0960938d6cc9aa17c72cb1c4ce3585afbbc6125dd2655ab1ab381872c287: Status 404 returned error can't find the container with id ff0e0960938d6cc9aa17c72cb1c4ce3585afbbc6125dd2655ab1ab381872c287
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.200831 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nrzgt"]
Jan 10 06:48:38 crc kubenswrapper[4810]: W0110 06:48:38.208023 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd05445dd_fa70_4239_af6e_56f88234a35b.slice/crio-461cd63eeb18322125d6262c98532ca983430de4ab307b4282efc83e1a68e74b WatchSource:0}: Error finding container 461cd63eeb18322125d6262c98532ca983430de4ab307b4282efc83e1a68e74b: Status 404 returned error can't find the container with id 461cd63eeb18322125d6262c98532ca983430de4ab307b4282efc83e1a68e74b
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.236261 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr"]
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.256267 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t88kw"]
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.260037 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:38 crc kubenswrapper[4810]: E0110 06:48:38.260340 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:38.760327 +0000 UTC m=+147.375819883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.277764 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qxj7l"]
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.298737 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d"]
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.314835 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-q59cx"]
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.338632 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-6bmqz"]
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.361529 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:48:38 crc kubenswrapper[4810]: E0110 06:48:38.361915 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:38.861889365 +0000 UTC m=+147.477382248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.362878 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:38 crc kubenswrapper[4810]: E0110 06:48:38.363338 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:38.863327143 +0000 UTC m=+147.478820026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.365301 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78"]
Jan 10 06:48:38 crc kubenswrapper[4810]: W0110 06:48:38.432757 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf929e8f8_6ccd_4d31_b6e8_afcd6d6d1da0.slice/crio-a604d900c00522255159aad056333649257ff88523d5fc09a60ba5707df44aae WatchSource:0}: Error finding container a604d900c00522255159aad056333649257ff88523d5fc09a60ba5707df44aae: Status 404 returned error can't find the container with id a604d900c00522255159aad056333649257ff88523d5fc09a60ba5707df44aae
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.463546 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:48:38 crc kubenswrapper[4810]: E0110 06:48:38.463907 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:38.963889522 +0000 UTC m=+147.579382405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.471129 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j"]
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.489079 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp"]
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.564887 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:38 crc kubenswrapper[4810]: E0110 06:48:38.565445 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:39.065433788 +0000 UTC m=+147.680926671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:38 crc kubenswrapper[4810]: W0110 06:48:38.585286 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49455ed9_e8ab_44c8_9075_2ffbbebe36a8.slice/crio-22529e76fdcd0e43f7b6bcff1bc3389a9f9911ae415a7c006d516bb1732ea208 WatchSource:0}: Error finding container 22529e76fdcd0e43f7b6bcff1bc3389a9f9911ae415a7c006d516bb1732ea208: Status 404 returned error can't find the container with id 22529e76fdcd0e43f7b6bcff1bc3389a9f9911ae415a7c006d516bb1732ea208
Jan 10 06:48:38 crc kubenswrapper[4810]: W0110 06:48:38.638687 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a401b43_55ea_45de_8fdd_d00354841be3.slice/crio-812ae4d46b90c4abbfdcd4baf03e7632c4c10250c9500251ca11a69670d95b01 WatchSource:0}: Error finding container 812ae4d46b90c4abbfdcd4baf03e7632c4c10250c9500251ca11a69670d95b01: Status 404 returned error can't find the container with id 812ae4d46b90c4abbfdcd4baf03e7632c4c10250c9500251ca11a69670d95b01
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.667646 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:48:38 crc kubenswrapper[4810]: E0110 06:48:38.667809 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:39.167782334 +0000 UTC m=+147.783275217 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.668914 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:38 crc kubenswrapper[4810]: E0110 06:48:38.669215 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:39.169186741 +0000 UTC m=+147.784679624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.676671 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs" event={"ID":"db211a74-cc64-4820-b576-179f6affa220","Type":"ContainerStarted","Data":"f94118f2bfe0c7d6739abc2d872303fa928fe7179f5bc85d5607d34fa79b2698"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.677846 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qxj7l" event={"ID":"655c215c-110b-4504-993a-2263e6462e2b","Type":"ContainerStarted","Data":"696a8cc8bccfd3bb43b445df06a66ed863019c1c86cfbaef98d4008b4af2b2d6"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.697737 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr" event={"ID":"351d10cd-c856-424c-a051-a0b4ffcc26a5","Type":"ContainerStarted","Data":"d3c4db74a4f54127bd7d7b43eaf8b1b25fa6950ddd638b17fb623934036eea48"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.697776 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr" event={"ID":"351d10cd-c856-424c-a051-a0b4ffcc26a5","Type":"ContainerStarted","Data":"6d8207b31c23080d1041d16b5093e65dca238f8f6128b479aa190b90472c4efa"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.736483 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm" event={"ID":"1adffde5-7978-40ac-8d09-7faff1fae25d","Type":"ContainerStarted","Data":"0e454112824887da93fc01da0d96ab92ceb2f3f9e035cec105ce4b49b3dc71a6"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.753632 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp" event={"ID":"4ca08908-f929-44c1-a0fe-61e028610dd8","Type":"ContainerStarted","Data":"866a3707659def6ece6d8e5ed6d96034aa0da3f48352d3245f478b614244be4d"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.753678 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp" event={"ID":"4ca08908-f929-44c1-a0fe-61e028610dd8","Type":"ContainerStarted","Data":"972cb49829e8a77b741496b6f1ecb1189b6561d16768a7e4bfe6d5e8309bc9e1"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.756838 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j" event={"ID":"9a401b43-55ea-45de-8fdd-d00354841be3","Type":"ContainerStarted","Data":"812ae4d46b90c4abbfdcd4baf03e7632c4c10250c9500251ca11a69670d95b01"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.768347 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx" event={"ID":"55e33629-ccfb-4893-a062-dab5d945138f","Type":"ContainerStarted","Data":"7affa972a4daed1c6fee562b60c0915f30c0d894658ace11f72c0b88123c1ba4"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.769367 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 10 06:48:38 crc kubenswrapper[4810]: E0110 06:48:38.769690 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:39.269675968 +0000 UTC m=+147.885168851 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.781571 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-km9lv" event={"ID":"058be455-c482-4531-9bd8-08b4013e7d3c","Type":"ContainerStarted","Data":"9e24fa1ceeed9008ee6d65715a7dc7ea5c77b91ff7891e3afaaef7013a3d2538"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.788565 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-4fltp" podStartSLOduration=127.788548487 podStartE2EDuration="2m7.788548487s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:38.788070425 +0000 UTC m=+147.403563308" watchObservedRunningTime="2026-01-10 06:48:38.788548487 +0000 UTC m=+147.404041370"
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.792733 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" event={"ID":"39b51457-06ae-4cf2-8b78-4f34bb908819","Type":"ContainerStarted","Data":"4abeca10345b1cc995044f8985ef15ac6ecdcb2250b968e3b1c3da6822f9fa17"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.793290 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.806336 4810 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kt27g container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.806385 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" podUID="39b51457-06ae-4cf2-8b78-4f34bb908819" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.807647 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq" event={"ID":"88e8ea65-2a94-43b3-85ec-0a3583cecb40","Type":"ContainerStarted","Data":"12967dae4c1483cf27222b00dcfa6b6a62a96e4741b1aff743d8695124e7fbb6"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.818071 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" podStartSLOduration=127.818056977 podStartE2EDuration="2m7.818056977s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:38.81739257 +0000 UTC m=+147.432885453" watchObservedRunningTime="2026-01-10 06:48:38.818056977 +0000 UTC m=+147.433549860"
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.826020 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" event={"ID":"d24f1d85-f530-4f1c-ae79-db5d7b273d6c","Type":"ContainerStarted","Data":"7185e21c62984b8b3b2444d00c84edc6a83bb41dd3fa5923722f6cc795862902"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.829783 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66" event={"ID":"d83df240-de42-4c0d-ba24-ead75566bc23","Type":"ContainerStarted","Data":"201ee27463c0d65acff758f3fa1724418855ec4f276d32b17997d6f2ff800e4b"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.836850 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr" event={"ID":"8806fc90-52a1-4bbb-a75e-231e034ea87c","Type":"ContainerStarted","Data":"c4895baf3f6bf2c171ac9a591aed392f7bf46e7ac1ee5def809b0c10b70d3a3a"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.841628 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6" event={"ID":"7706c50d-1b6d-4add-b687-3f59c6e080d2","Type":"ContainerStarted","Data":"df63fcd2cc7a077b5a94e18688ca40a8c2e697c2ebba1037f176be7b28201a60"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.844090 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd" event={"ID":"e65db429-f69c-4a8a-982b-0566164c6296","Type":"ContainerStarted","Data":"f3ed6b14ed2bcaee3b56c49a278ad66ed44968dfa070fc66a09367acb8c85062"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.847678 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" event={"ID":"d05445dd-fa70-4239-af6e-56f88234a35b","Type":"ContainerStarted","Data":"461cd63eeb18322125d6262c98532ca983430de4ab307b4282efc83e1a68e74b"}
Jan 10 06:48:38 crc kubenswrapper[4810]: W0110 06:48:38.854440 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-cee26f44ebd82494de92ee8a1038c3441693700678419c0774b86a92702a8d19 WatchSource:0}: Error finding container cee26f44ebd82494de92ee8a1038c3441693700678419c0774b86a92702a8d19: Status 404 returned error can't find the container with id cee26f44ebd82494de92ee8a1038c3441693700678419c0774b86a92702a8d19
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.857727 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fddzq" podStartSLOduration=127.857713407 podStartE2EDuration="2m7.857713407s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:38.841260241 +0000 UTC m=+147.456753124" watchObservedRunningTime="2026-01-10 06:48:38.857713407 +0000 UTC m=+147.473206280"
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.870928 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:48:38 crc kubenswrapper[4810]: E0110 06:48:38.871952 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:39.371941323 +0000 UTC m=+147.987434206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.887917 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x9n66" podStartSLOduration=127.887901294 podStartE2EDuration="2m7.887901294s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:38.858936968 +0000 UTC m=+147.474429851" watchObservedRunningTime="2026-01-10 06:48:38.887901294 +0000 UTC m=+147.503394177"
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.888590 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2" event={"ID":"e4556783-6235-4338-b56d-50146a186e0d","Type":"ContainerStarted","Data":"d6791a34f9d585b13bf76d7dfd38e3d2c3f410e91fd677143c8e37ae0f4c16d6"}
Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.888882 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-nxhxd" podStartSLOduration=128.88887576
podStartE2EDuration="2m8.88887576s" podCreationTimestamp="2026-01-10 06:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:38.8809264 +0000 UTC m=+147.496419283" watchObservedRunningTime="2026-01-10 06:48:38.88887576 +0000 UTC m=+147.504368643" Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.907338 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hqwc6" podStartSLOduration=127.907325128 podStartE2EDuration="2m7.907325128s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:38.90662536 +0000 UTC m=+147.522118233" watchObservedRunningTime="2026-01-10 06:48:38.907325128 +0000 UTC m=+147.522818011" Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.920164 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78" event={"ID":"e25a5c57-ddfb-4b84-aec6-8a512656f614","Type":"ContainerStarted","Data":"773ea85a8e5fb01c5115bda98900114de3be3f1446d365726153c09460e4f1c7"} Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.923799 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q59cx" event={"ID":"cffc4072-d68e-43c1-b8f5-eb6058d11aa9","Type":"ContainerStarted","Data":"5e7d439f0b9934c8fbb3ede3a09c5faacd3f07fbfd180c55ea5b4281405344f9"} Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.972297 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:38 crc kubenswrapper[4810]: E0110 06:48:38.973732 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:39.473716663 +0000 UTC m=+148.089209546 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:38 crc kubenswrapper[4810]: I0110 06:48:38.988011 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6bmqz" event={"ID":"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f","Type":"ContainerStarted","Data":"d742c37d81813d79f503049c9631a765aef7c8464943c140e7f5749bfe94516e"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.077853 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:39 crc kubenswrapper[4810]: E0110 06:48:39.078429 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-10 06:48:39.578417632 +0000 UTC m=+148.193910515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.080717 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" event={"ID":"755f846b-01e4-435d-b86c-cbe3f917aa31","Type":"ContainerStarted","Data":"12eadeb9feccdef2fc44f15b7dcc115f1a4f403b53fd4722ad7d8b409e723f90"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.080785 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" event={"ID":"755f846b-01e4-435d-b86c-cbe3f917aa31","Type":"ContainerStarted","Data":"38a2de2a402d77923f0ab26011b1cc22f2eeb11c56e3b9f4ca1495dfd1da79df"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.094969 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-sssf2" podStartSLOduration=128.094947199 podStartE2EDuration="2m8.094947199s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:38.939988011 +0000 UTC m=+147.555480894" watchObservedRunningTime="2026-01-10 06:48:39.094947199 +0000 UTC m=+147.710440082" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.110371 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kfptr" event={"ID":"f5f4931f-9858-4287-9f70-e2bfcb25eabd","Type":"ContainerStarted","Data":"751fa40b3a29d2e7512fdbf4ac832c42ea1607db0f5e0a7d4edc6d417e1c4f6d"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.145453 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf" event={"ID":"ed76ec22-75d0-43c0-b338-5062f922edda","Type":"ContainerStarted","Data":"d7edf6b4d0759b63861dba2beb0cc0d96c9bea579c3876adf1ee6970886400df"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.178647 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:39 crc kubenswrapper[4810]: E0110 06:48:39.178947 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:39.67893295 +0000 UTC m=+148.294425833 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.190430 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r" event={"ID":"3767b512-d183-4e69-9534-d2ad2ad5e1c1","Type":"ContainerStarted","Data":"bd00611348aa4212f0ca0c39ff2a1c0ea3ce0c8a98d36ab0fdcbffda79207e0b"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.206898 4810 generic.go:334] "Generic (PLEG): container finished" podID="c67349c1-8617-4228-b6e9-009b94caab7a" containerID="38671daa6f5735cceabeb7b4da91bc52e235d650703e42f3448ab9cc85339bb5" exitCode=0 Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.206959 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" event={"ID":"c67349c1-8617-4228-b6e9-009b94caab7a","Type":"ContainerDied","Data":"38671daa6f5735cceabeb7b4da91bc52e235d650703e42f3448ab9cc85339bb5"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.254075 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp" event={"ID":"49455ed9-e8ab-44c8-9075-2ffbbebe36a8","Type":"ContainerStarted","Data":"22529e76fdcd0e43f7b6bcff1bc3389a9f9911ae415a7c006d516bb1732ea208"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.270807 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4" 
event={"ID":"1cc68900-ee14-4825-b4c8-375873200016","Type":"ContainerStarted","Data":"ff0e0960938d6cc9aa17c72cb1c4ce3585afbbc6125dd2655ab1ab381872c287"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.272607 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xw9ln" event={"ID":"845a1b6a-e7d4-467d-a835-053708fed54f","Type":"ContainerStarted","Data":"d20c4bf5bb193b0594e69c0024d17b17ddbfbb519a08c775998ab398b02020d1"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.272633 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xw9ln" event={"ID":"845a1b6a-e7d4-467d-a835-053708fed54f","Type":"ContainerStarted","Data":"a707cdc99d84dc1aa6202156cd1aee4232feb0a5f52077710694d3b595cc512f"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.273037 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xw9ln" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.280439 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.281038 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hqsj6" event={"ID":"196f4a49-7ed6-4346-bbc1-3908eb17eadc","Type":"ContainerStarted","Data":"67863a6a089406619234a7c493155f8f31c38c20502770d81a39f68124728259"} Jan 10 06:48:39 crc kubenswrapper[4810]: E0110 06:48:39.284005 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-10 06:48:39.783988408 +0000 UTC m=+148.399481291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.286793 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-xw9ln" podStartSLOduration=128.286778591 podStartE2EDuration="2m8.286778591s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:39.285308813 +0000 UTC m=+147.900801696" watchObservedRunningTime="2026-01-10 06:48:39.286778591 +0000 UTC m=+147.902271474" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.291484 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-xw9ln container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.291669 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xw9ln" podUID="845a1b6a-e7d4-467d-a835-053708fed54f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.295385 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-nrzgt" event={"ID":"f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0","Type":"ContainerStarted","Data":"a604d900c00522255159aad056333649257ff88523d5fc09a60ba5707df44aae"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.296609 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq" event={"ID":"931833dc-0102-4ca9-9a92-129d9a97170b","Type":"ContainerStarted","Data":"ee966138df8c38c1015e178559eae858de2ee82fcac0015221e2f7f3aba4a497"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.300737 4810 generic.go:334] "Generic (PLEG): container finished" podID="7e755f5b-0a8c-4108-a133-8d5955de3641" containerID="1bec497d5a6a811ba00903e2266df8ab2dc88a63a279d0cb56073ff2a5ca59bf" exitCode=0 Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.300794 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" event={"ID":"7e755f5b-0a8c-4108-a133-8d5955de3641","Type":"ContainerDied","Data":"1bec497d5a6a811ba00903e2266df8ab2dc88a63a279d0cb56073ff2a5ca59bf"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.301755 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw" event={"ID":"85c62a01-c818-47b4-92fd-bcd87d8218a8","Type":"ContainerStarted","Data":"ee888404bc40a558d18572ee60c5e84c544f5171d6628392e06ef024cfe9c62b"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.302557 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp" event={"ID":"edd79abd-0e91-4256-9943-f1b08e35b661","Type":"ContainerStarted","Data":"1dde298e2a80f78992ef7c5ae8554f08f1d4541e76429b671df957ca03712340"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.303624 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2" event={"ID":"194fadc9-26fd-44d7-84db-14d442ba6dea","Type":"ContainerStarted","Data":"35bd9eb2171c9dfcf4603a17438e6394a0e06bd99ac242ae663ca2db88b79fb3"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.304700 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-mh9w2" event={"ID":"1ad4afbe-31ce-4f61-866c-390b33da3bbb","Type":"ContainerStarted","Data":"6051d82e1abd5baecc7fb2711b7b2e336e8c5fc6355d2017603b542e6576b7e4"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.308335 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7vnnq" podStartSLOduration=128.308322981 podStartE2EDuration="2m8.308322981s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:39.307961751 +0000 UTC m=+147.923454634" watchObservedRunningTime="2026-01-10 06:48:39.308322981 +0000 UTC m=+147.923815864" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.311868 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm" event={"ID":"7c3c724b-e4ad-4b31-a6c2-80b82f352ac6","Type":"ContainerStarted","Data":"ea3e9b1040e76834e9f68d397e902b13294f82b3a1da06998acbc2d67c6fc527"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.311901 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm" event={"ID":"7c3c724b-e4ad-4b31-a6c2-80b82f352ac6","Type":"ContainerStarted","Data":"f032abbab44dab3fb0edffa728dc7a4b80e96876b071d24228e19cdb33ecbfa9"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.312679 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.314262 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" event={"ID":"7c7ec181-e310-4832-af7a-d6a5437e565d","Type":"ContainerStarted","Data":"9e0aa2787e35f64267e3c0d9e4aa65d79103d74e94dc39990df49c39dd4b4d6d"} Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.314285 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.314595 4810 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-d85nm container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.314622 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm" podUID="7c3c724b-e4ad-4b31-a6c2-80b82f352ac6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.324260 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.353582 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-mh9w2" podStartSLOduration=128.353564478 podStartE2EDuration="2m8.353564478s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-10 06:48:39.352744606 +0000 UTC m=+147.968237489" watchObservedRunningTime="2026-01-10 06:48:39.353564478 +0000 UTC m=+147.969057361" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.372291 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zk2r2" podStartSLOduration=129.372274432 podStartE2EDuration="2m9.372274432s" podCreationTimestamp="2026-01-10 06:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:39.37105351 +0000 UTC m=+147.986546393" watchObservedRunningTime="2026-01-10 06:48:39.372274432 +0000 UTC m=+147.987767315" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.381325 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:39 crc kubenswrapper[4810]: E0110 06:48:39.382227 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:39.882212145 +0000 UTC m=+148.497705038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.391498 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-pr5qp" podStartSLOduration=128.39148456 podStartE2EDuration="2m8.39148456s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:39.387534556 +0000 UTC m=+148.003027439" watchObservedRunningTime="2026-01-10 06:48:39.39148456 +0000 UTC m=+148.006977443" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.407392 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" podStartSLOduration=128.40737962 podStartE2EDuration="2m8.40737962s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:39.405398908 +0000 UTC m=+148.020891791" watchObservedRunningTime="2026-01-10 06:48:39.40737962 +0000 UTC m=+148.022872493" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.454936 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm" podStartSLOduration=128.454917887 podStartE2EDuration="2m8.454917887s" podCreationTimestamp="2026-01-10 06:46:31 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:39.430398819 +0000 UTC m=+148.045891702" watchObservedRunningTime="2026-01-10 06:48:39.454917887 +0000 UTC m=+148.070410770" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.456469 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wnzzr" podStartSLOduration=6.456459918 podStartE2EDuration="6.456459918s" podCreationTimestamp="2026-01-10 06:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:39.455846022 +0000 UTC m=+148.071338905" watchObservedRunningTime="2026-01-10 06:48:39.456459918 +0000 UTC m=+148.071952801" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.471785 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qhr8m" podStartSLOduration=128.471768393 podStartE2EDuration="2m8.471768393s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:39.471370703 +0000 UTC m=+148.086863596" watchObservedRunningTime="2026-01-10 06:48:39.471768393 +0000 UTC m=+148.087261276" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.482863 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:39 crc kubenswrapper[4810]: E0110 06:48:39.483187 4810 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:39.983174164 +0000 UTC m=+148.598667047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.583830 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:39 crc kubenswrapper[4810]: E0110 06:48:39.584152 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:40.084128514 +0000 UTC m=+148.699621397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.584236 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:39 crc kubenswrapper[4810]: E0110 06:48:39.584494 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:40.084483684 +0000 UTC m=+148.699976567 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.685320 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:39 crc kubenswrapper[4810]: E0110 06:48:39.685556 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:40.185528875 +0000 UTC m=+148.801021758 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.685727 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:39 crc kubenswrapper[4810]: E0110 06:48:39.686082 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:40.186066499 +0000 UTC m=+148.801559382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.787132 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:39 crc kubenswrapper[4810]: E0110 06:48:39.787416 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:40.287391679 +0000 UTC m=+148.902884562 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.788564 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:39 crc kubenswrapper[4810]: E0110 06:48:39.788855 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:40.288847438 +0000 UTC m=+148.904340321 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.821644 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qhr8m" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.830609 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:39 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:39 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:39 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.830844 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.892624 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:39 crc kubenswrapper[4810]: E0110 06:48:39.892911 4810 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:40.392898599 +0000 UTC m=+149.008391482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:39 crc kubenswrapper[4810]: I0110 06:48:39.995832 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:39 crc kubenswrapper[4810]: E0110 06:48:39.996120 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:40.496108518 +0000 UTC m=+149.111601401 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.098216 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:40 crc kubenswrapper[4810]: E0110 06:48:40.098519 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:40.598505645 +0000 UTC m=+149.213998518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.199401 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:40 crc kubenswrapper[4810]: E0110 06:48:40.199728 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:40.699712321 +0000 UTC m=+149.315205204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.305041 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:40 crc kubenswrapper[4810]: E0110 06:48:40.305359 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:40.805345255 +0000 UTC m=+149.420838138 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.342048 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr" event={"ID":"351d10cd-c856-424c-a051-a0b4ffcc26a5","Type":"ContainerStarted","Data":"0bfcb88d3e180e34a17e5477ccbc2a0f915e2f487d8116ee49eee5d7036d4cff"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.342421 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.351704 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a20157afb0f5e71d8c4051745064d7da882041b96a928de2f2d317c50839de71"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.351743 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cee26f44ebd82494de92ee8a1038c3441693700678419c0774b86a92702a8d19"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.371401 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78" 
event={"ID":"e25a5c57-ddfb-4b84-aec6-8a512656f614","Type":"ContainerStarted","Data":"92e6db8b83407b6fe817d75f4f23dfd791c08e501284e2f701ea5edddffbb0c6"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.372001 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr" podStartSLOduration=129.371987647 podStartE2EDuration="2m9.371987647s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:40.371522425 +0000 UTC m=+148.987015308" watchObservedRunningTime="2026-01-10 06:48:40.371987647 +0000 UTC m=+148.987480530" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.379742 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm" event={"ID":"1adffde5-7978-40ac-8d09-7faff1fae25d","Type":"ContainerStarted","Data":"5d7e07f7bd61974618163189dc1369cc3a07b2a007ddc61223aaf44b084426cd"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.404717 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78" podStartSLOduration=129.404703151 podStartE2EDuration="2m9.404703151s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:40.391536574 +0000 UTC m=+149.007029457" watchObservedRunningTime="2026-01-10 06:48:40.404703151 +0000 UTC m=+149.020196034" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.404926 4810 generic.go:334] "Generic (PLEG): container finished" podID="db211a74-cc64-4820-b576-179f6affa220" containerID="573725b63dc5a068361375c67d000236bfd6cb3236275491c25246a72b6b5237" exitCode=0 Jan 10 06:48:40 crc 
kubenswrapper[4810]: I0110 06:48:40.405894 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs" event={"ID":"db211a74-cc64-4820-b576-179f6affa220","Type":"ContainerDied","Data":"573725b63dc5a068361375c67d000236bfd6cb3236275491c25246a72b6b5237"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.405910 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:40 crc kubenswrapper[4810]: E0110 06:48:40.406147 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:40.906138289 +0000 UTC m=+149.521631172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.423481 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr" event={"ID":"8806fc90-52a1-4bbb-a75e-231e034ea87c","Type":"ContainerStarted","Data":"f5a964b1c349d5fcfb4f407b869b5594a8520ab353eca1dbcddc71792d149486"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.424419 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.426108 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mbcrm" podStartSLOduration=129.426093178 podStartE2EDuration="2m9.426093178s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:40.423486029 +0000 UTC m=+149.038978912" watchObservedRunningTime="2026-01-10 06:48:40.426093178 +0000 UTC m=+149.041586061" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.433173 4810 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-pbshr container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 10 06:48:40 
crc kubenswrapper[4810]: I0110 06:48:40.433256 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr" podUID="8806fc90-52a1-4bbb-a75e-231e034ea87c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.433291 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qxj7l" event={"ID":"655c215c-110b-4504-993a-2263e6462e2b","Type":"ContainerStarted","Data":"5ac7f1fcdcbe8ac64e0531774595f8df92aa2df4c605b8bf80eeab3931990011"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.445875 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" event={"ID":"d05445dd-fa70-4239-af6e-56f88234a35b","Type":"ContainerStarted","Data":"22566abca16bb3eb6a5299acb0b73f21cfd0ce1ca090e7068ad777b1c2c37147"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.446643 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.458415 4810 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-gx7wx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.458481 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" podUID="d05445dd-fa70-4239-af6e-56f88234a35b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Jan 10 06:48:40 crc 
kubenswrapper[4810]: I0110 06:48:40.508848 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:40 crc kubenswrapper[4810]: E0110 06:48:40.510088 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:41.010061658 +0000 UTC m=+149.625554591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.511682 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw" event={"ID":"85c62a01-c818-47b4-92fd-bcd87d8218a8","Type":"ContainerStarted","Data":"56e01bca906ee10dd0e111986c291166401aa816cce8ed18e1cf984d9ec62c76"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.568641 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp" event={"ID":"49455ed9-e8ab-44c8-9075-2ffbbebe36a8","Type":"ContainerStarted","Data":"0cc8c2a18ba5f2b9888fb691b3920bf027d7d39166181377f8583647dcb6a8af"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.611306 4810 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr" podStartSLOduration=129.611290105 podStartE2EDuration="2m9.611290105s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:40.55249171 +0000 UTC m=+149.167984593" watchObservedRunningTime="2026-01-10 06:48:40.611290105 +0000 UTC m=+149.226782978" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.611865 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:40 crc kubenswrapper[4810]: E0110 06:48:40.612185 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:41.112173818 +0000 UTC m=+149.727666691 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.612813 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" podStartSLOduration=130.612804965 podStartE2EDuration="2m10.612804965s" podCreationTimestamp="2026-01-10 06:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:40.607637588 +0000 UTC m=+149.223130471" watchObservedRunningTime="2026-01-10 06:48:40.612804965 +0000 UTC m=+149.228297848" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.619108 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx" event={"ID":"55e33629-ccfb-4893-a062-dab5d945138f","Type":"ContainerStarted","Data":"f8f06f81fc7c5fe7cc07cd1210f512ff0d96ae848ca0f8f46bade28f87a7654f"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.619149 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx" event={"ID":"55e33629-ccfb-4893-a062-dab5d945138f","Type":"ContainerStarted","Data":"1e152bd8a7f74091a78845ea889b4fb485e6984d83c9e2b5007550d30a0fcc80"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.638371 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kfptr" 
event={"ID":"f5f4931f-9858-4287-9f70-e2bfcb25eabd","Type":"ContainerStarted","Data":"dde90f2f8637a6b8916a1fd2ed5acc6058c4aaa2044ca5f137239c20c543ff09"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.650745 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-r9mqp" podStartSLOduration=129.650731137 podStartE2EDuration="2m9.650731137s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:40.649012792 +0000 UTC m=+149.264505665" watchObservedRunningTime="2026-01-10 06:48:40.650731137 +0000 UTC m=+149.266224020" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.660791 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nrzgt" event={"ID":"f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0","Type":"ContainerStarted","Data":"e0303fcfb12bddd6d880ad4bf7ed6a228c9d150f20f329cfd77331db3091a447"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.701022 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j" event={"ID":"9a401b43-55ea-45de-8fdd-d00354841be3","Type":"ContainerStarted","Data":"1756c18ddfa2cdb160cde989c51006df888de381432fe17aafa1e1162bf9f20f"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.706565 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6v9fx" podStartSLOduration=130.706550793 podStartE2EDuration="2m10.706550793s" podCreationTimestamp="2026-01-10 06:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:40.706323797 +0000 UTC m=+149.321816680" watchObservedRunningTime="2026-01-10 
06:48:40.706550793 +0000 UTC m=+149.322043676" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.712768 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:40 crc kubenswrapper[4810]: E0110 06:48:40.712920 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:41.212897142 +0000 UTC m=+149.828390025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.715017 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:40 crc kubenswrapper[4810]: E0110 06:48:40.715280 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-10 06:48:41.215269074 +0000 UTC m=+149.830761957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.740227 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-kfptr" podStartSLOduration=129.740211373 podStartE2EDuration="2m9.740211373s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:40.739712111 +0000 UTC m=+149.355204984" watchObservedRunningTime="2026-01-10 06:48:40.740211373 +0000 UTC m=+149.355704256" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.750725 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf" event={"ID":"ed76ec22-75d0-43c0-b338-5062f922edda","Type":"ContainerStarted","Data":"888d79cb35849649e0f1d594c92c4aeb7653817976e39aa553cf805fb96a498b"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.791873 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r" event={"ID":"3767b512-d183-4e69-9534-d2ad2ad5e1c1","Type":"ContainerStarted","Data":"0e0c01f33a21ee77306daf6a303cfaf8f22b5f49d7881686ba184fa1b004434d"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.807117 4810 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nrzgt" podStartSLOduration=129.807100772 podStartE2EDuration="2m9.807100772s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:40.773491124 +0000 UTC m=+149.388984007" watchObservedRunningTime="2026-01-10 06:48:40.807100772 +0000 UTC m=+149.422593655" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.807604 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j" podStartSLOduration=129.807597445 podStartE2EDuration="2m9.807597445s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:40.806082965 +0000 UTC m=+149.421575848" watchObservedRunningTime="2026-01-10 06:48:40.807597445 +0000 UTC m=+149.423090328" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.818689 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:40 crc kubenswrapper[4810]: E0110 06:48:40.819471 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:41.319447119 +0000 UTC m=+149.934940002 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.820314 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hqsj6" event={"ID":"196f4a49-7ed6-4346-bbc1-3908eb17eadc","Type":"ContainerStarted","Data":"40fa5a4336f6dd34c63c6420770c01f0c449cfb3fc696f1b2f9f2347f3713aee"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.844055 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:40 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:40 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:40 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.844106 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.853658 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.855464 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vhcxf" podStartSLOduration=129.85545461 podStartE2EDuration="2m9.85545461s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:40.85394155 +0000 UTC m=+149.469434433" watchObservedRunningTime="2026-01-10 06:48:40.85545461 +0000 UTC m=+149.470947493" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.872711 4810 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-wf9f4 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.872767 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" podUID="c67349c1-8617-4228-b6e9-009b94caab7a" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.901445 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4" event={"ID":"1cc68900-ee14-4825-b4c8-375873200016","Type":"ContainerStarted","Data":"3e491cbb082056aec32e2e585ff1df5e5fe7fd2c8d873dcdc066425f3c2a16fd"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.902191 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.923233 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:40 crc kubenswrapper[4810]: E0110 06:48:40.923490 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:41.42347961 +0000 UTC m=+150.038972493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.938596 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" event={"ID":"d24f1d85-f530-4f1c-ae79-db5d7b273d6c","Type":"ContainerStarted","Data":"3392bfb749ed91d9818b7bcd32c553a668584f7a7ae273a619a95c497291e5b6"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.985401 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8750abbc317e461670c92b9fa9352e7444119dd5916baac83cc06ce78f46b770"} Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 06:48:40.985945 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:48:40 crc kubenswrapper[4810]: I0110 
06:48:40.993120 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-78f9r" podStartSLOduration=131.993108891 podStartE2EDuration="2m11.993108891s" podCreationTimestamp="2026-01-10 06:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:40.94882751 +0000 UTC m=+149.564320393" watchObservedRunningTime="2026-01-10 06:48:40.993108891 +0000 UTC m=+149.608601764" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.031609 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:41 crc kubenswrapper[4810]: E0110 06:48:41.032213 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:41.532182554 +0000 UTC m=+150.147675427 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.032848 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4" podStartSLOduration=130.032815281 podStartE2EDuration="2m10.032815281s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:41.027570292 +0000 UTC m=+149.643063175" watchObservedRunningTime="2026-01-10 06:48:41.032815281 +0000 UTC m=+149.648308164" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.033153 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-q59cx" event={"ID":"cffc4072-d68e-43c1-b8f5-eb6058d11aa9","Type":"ContainerStarted","Data":"85ce2d9a1a01ec7a8e2fddd72e65794ed9febb6aa1774606b36f6741e947fde6"} Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.033843 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-hqsj6" podStartSLOduration=130.033837888 podStartE2EDuration="2m10.033837888s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:40.993923782 +0000 UTC m=+149.609416665" watchObservedRunningTime="2026-01-10 06:48:41.033837888 +0000 UTC m=+149.649330771" Jan 10 06:48:41 crc kubenswrapper[4810]: 
I0110 06:48:41.044517 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4bdb00452023390375ea559007575da8aa76a9b3c55fd12a42c61d2471032cba"} Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.046327 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" podStartSLOduration=131.046308258 podStartE2EDuration="2m11.046308258s" podCreationTimestamp="2026-01-10 06:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:41.045936867 +0000 UTC m=+149.661429770" watchObservedRunningTime="2026-01-10 06:48:41.046308258 +0000 UTC m=+149.661801141" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.067293 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" event={"ID":"9da8b1a0-cfbe-4a73-a760-6a3679c58ba3","Type":"ContainerStarted","Data":"79467a7fba1642e6686fbc4cc6810533587ca956942156f8db1329cb6a1192fb"} Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.069808 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-xw9ln container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.069840 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xw9ln" podUID="845a1b6a-e7d4-467d-a835-053708fed54f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.070602 
4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.070647 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-km9lv" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.077514 4810 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pfgf2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.077558 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" podUID="755f846b-01e4-435d-b86c-cbe3f917aa31" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.082315 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.134941 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:41 crc kubenswrapper[4810]: E0110 06:48:41.136175 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-10 06:48:41.636161913 +0000 UTC m=+150.251654796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.150505 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-d85nm" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.235904 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:41 crc kubenswrapper[4810]: E0110 06:48:41.236061 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:41.736036984 +0000 UTC m=+150.351529867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.236637 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:41 crc kubenswrapper[4810]: E0110 06:48:41.238070 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:41.738061158 +0000 UTC m=+150.353554031 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.277129 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" podStartSLOduration=130.277114771 podStartE2EDuration="2m10.277114771s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:41.187424829 +0000 UTC m=+149.802917712" watchObservedRunningTime="2026-01-10 06:48:41.277114771 +0000 UTC m=+149.892607654" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.278636 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-fpl4m" podStartSLOduration=130.278630541 podStartE2EDuration="2m10.278630541s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:41.276128724 +0000 UTC m=+149.891621607" watchObservedRunningTime="2026-01-10 06:48:41.278630541 +0000 UTC m=+149.894123424" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.337464 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:41 crc kubenswrapper[4810]: E0110 06:48:41.337736 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:41.837718323 +0000 UTC m=+150.453211206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.428158 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" podStartSLOduration=130.428141823 podStartE2EDuration="2m10.428141823s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:41.3678718 +0000 UTC m=+149.983364683" watchObservedRunningTime="2026-01-10 06:48:41.428141823 +0000 UTC m=+150.043634706" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.439854 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:41 crc kubenswrapper[4810]: E0110 06:48:41.440194 
4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:41.940179692 +0000 UTC m=+150.555672575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.540683 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:41 crc kubenswrapper[4810]: E0110 06:48:41.541085 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:42.04107071 +0000 UTC m=+150.656563593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.561576 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-km9lv" podStartSLOduration=130.561559482 podStartE2EDuration="2m10.561559482s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:41.559579279 +0000 UTC m=+150.175072162" watchObservedRunningTime="2026-01-10 06:48:41.561559482 +0000 UTC m=+150.177052365" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.641836 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:41 crc kubenswrapper[4810]: E0110 06:48:41.642074 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:42.14206452 +0000 UTC m=+150.757557403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.734469 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-q59cx" podStartSLOduration=8.734449903 podStartE2EDuration="8.734449903s" podCreationTimestamp="2026-01-10 06:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:41.690873461 +0000 UTC m=+150.306366344" watchObservedRunningTime="2026-01-10 06:48:41.734449903 +0000 UTC m=+150.349942786" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.743770 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:41 crc kubenswrapper[4810]: E0110 06:48:41.744258 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:42.244242412 +0000 UTC m=+150.859735295 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.823341 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:41 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:41 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:41 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.823387 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.845131 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:41 crc kubenswrapper[4810]: E0110 06:48:41.845450 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-10 06:48:42.345438537 +0000 UTC m=+150.960931420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.902454 4810 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-wlpm4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.902523 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4" podUID="1cc68900-ee14-4825-b4c8-375873200016" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 10 06:48:41 crc kubenswrapper[4810]: I0110 06:48:41.946231 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:41 crc kubenswrapper[4810]: E0110 06:48:41.946577 4810 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:42.446558702 +0000 UTC m=+151.062051585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.047514 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:42 crc kubenswrapper[4810]: E0110 06:48:42.047781 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:42.547770198 +0000 UTC m=+151.163263081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.068898 4810 patch_prober.go:28] interesting pod/console-operator-58897d9998-km9lv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.068950 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-km9lv" podUID="058be455-c482-4531-9bd8-08b4013e7d3c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.072496 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1d772873bd385eaba7c2455d3c81b21d4ef81b29386dca9827c8a6645ac460ee"} Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.074463 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs" event={"ID":"db211a74-cc64-4820-b576-179f6affa220","Type":"ContainerStarted","Data":"15577402d883a6b150e7dfd973cc4e2ecbcc289297b7dd0cbf7f36e786f832a9"} Jan 10 06:48:42 
crc kubenswrapper[4810]: I0110 06:48:42.074989 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.076339 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6bmqz" event={"ID":"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f","Type":"ContainerStarted","Data":"45c7df65c1c598ea590bd0a5d84d69e9587ee7450ce822dfe3f1dc997a8d95d1"} Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.077686 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qxj7l" event={"ID":"655c215c-110b-4504-993a-2263e6462e2b","Type":"ContainerStarted","Data":"7a021a5e147041ce36a8e3f97e330d0f6e690b70374ba27716618afc40660226"} Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.078037 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qxj7l" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.079216 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g5s7j" event={"ID":"9a401b43-55ea-45de-8fdd-d00354841be3","Type":"ContainerStarted","Data":"4c6e82e7b6119c087df7d18a90309cc81d784f8fbe02c0c9557d7933f85a0a7a"} Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.080573 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nrzgt" event={"ID":"f929e8f8-6ccd-4d31-b6e8-afcd6d6d1da0","Type":"ContainerStarted","Data":"01c63be13dbaaee1dd4d1a3785e905c8ad53b9ed1f386bb99581bd352cb49e0b"} Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.082227 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" event={"ID":"7e755f5b-0a8c-4108-a133-8d5955de3641","Type":"ContainerStarted","Data":"6db28e411c5ccb5e7f59cb791e35dcf1939eaeb5461b542be26a623933303696"} Jan 10 
06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.082251 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" event={"ID":"7e755f5b-0a8c-4108-a133-8d5955de3641","Type":"ContainerStarted","Data":"9a58923cd1fb4d5ccf91c4e5021638d7ac5ac2382c4db1724d7c0ce5f167b2d1"} Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.083652 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ac10f19c233796fcd6c2020024de7fb2d7b1c2058a809afc3b53ca599c61bc7c"} Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.085332 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" event={"ID":"c67349c1-8617-4228-b6e9-009b94caab7a","Type":"ContainerStarted","Data":"8e9c25ead7bb708ae7b23fc9d3fd4d52cc84be09f34605fa15962cd1537622dd"} Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.086886 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw" event={"ID":"85c62a01-c818-47b4-92fd-bcd87d8218a8","Type":"ContainerStarted","Data":"b0769cd685d51ffee1f887df5c6ba04c144b450734eab0faa9af739e55a8e696"} Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.089068 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-dhw78" event={"ID":"e25a5c57-ddfb-4b84-aec6-8a512656f614","Type":"ContainerStarted","Data":"6260d2a7d4c6aa55098c11bbf419bc47084ade39ec34cb755b4cdc715bc67cd8"} Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.096025 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.096631 4810 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-km9lv" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.116750 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-pbshr" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.135798 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs" podStartSLOduration=131.135781865 podStartE2EDuration="2m11.135781865s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:42.120368528 +0000 UTC m=+150.735861421" watchObservedRunningTime="2026-01-10 06:48:42.135781865 +0000 UTC m=+150.751274738" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.148614 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:42 crc kubenswrapper[4810]: E0110 06:48:42.150351 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:42.65033574 +0000 UTC m=+151.265828623 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.250828 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:42 crc kubenswrapper[4810]: E0110 06:48:42.251271 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:42.751259728 +0000 UTC m=+151.366752611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.265081 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-t88kw" podStartSLOduration=131.265067064 podStartE2EDuration="2m11.265067064s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:42.215843432 +0000 UTC m=+150.831336315" watchObservedRunningTime="2026-01-10 06:48:42.265067064 +0000 UTC m=+150.880559947" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.352750 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:42 crc kubenswrapper[4810]: E0110 06:48:42.352848 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:42.852833435 +0000 UTC m=+151.468326318 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.353064 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:42 crc kubenswrapper[4810]: E0110 06:48:42.353341 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:42.853333258 +0000 UTC m=+151.468826141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.430595 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qxj7l" podStartSLOduration=9.43057821 podStartE2EDuration="9.43057821s" podCreationTimestamp="2026-01-10 06:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:42.423899434 +0000 UTC m=+151.039392317" watchObservedRunningTime="2026-01-10 06:48:42.43057821 +0000 UTC m=+151.046071093" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.454877 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:42 crc kubenswrapper[4810]: E0110 06:48:42.455010 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:42.954993196 +0000 UTC m=+151.570486079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.455090 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:42 crc kubenswrapper[4810]: E0110 06:48:42.455337 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:42.955329965 +0000 UTC m=+151.570822848 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.509707 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.535658 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" podStartSLOduration=132.535641748 podStartE2EDuration="2m12.535641748s" podCreationTimestamp="2026-01-10 06:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:42.531769056 +0000 UTC m=+151.147261939" watchObservedRunningTime="2026-01-10 06:48:42.535641748 +0000 UTC m=+151.151134621" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.556287 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:42 crc kubenswrapper[4810]: E0110 06:48:42.556576 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-10 06:48:43.056561541 +0000 UTC m=+151.672054424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.657312 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:42 crc kubenswrapper[4810]: E0110 06:48:42.657632 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:43.157617174 +0000 UTC m=+151.773110057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.760614 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:42 crc kubenswrapper[4810]: E0110 06:48:42.761256 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-10 06:48:43.261240344 +0000 UTC m=+151.876733227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.766538 4810 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.828228 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:42 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:42 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:42 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.828277 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.861836 4810 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-10T06:48:42.766573084Z","Handler":null,"Name":""} Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.862454 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:42 crc kubenswrapper[4810]: E0110 06:48:42.862788 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-10 06:48:43.362775219 +0000 UTC m=+151.978268092 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nqtg7" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.887089 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-wlpm4" Jan 10 06:48:42 crc kubenswrapper[4810]: I0110 06:48:42.963964 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:42 crc kubenswrapper[4810]: E0110 06:48:42.964253 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-10 06:48:43.464237082 +0000 UTC m=+152.079729965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:42.996688 4810 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:42.996741 4810 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.065680 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.068765 4810 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.068806 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.094341 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6bmqz" event={"ID":"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f","Type":"ContainerStarted","Data":"e804006f5cc5a925d5c2a9454fa04c069e53a6d1fd2e6f9c289d43c69a2532cf"} Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.127335 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nqtg7\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.134139 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k49l4"] Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.134947 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.142515 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.153543 4810 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-p68qs container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 10 06:48:43 crc kubenswrapper[4810]: [+]log ok Jan 10 06:48:43 crc kubenswrapper[4810]: [-]poststarthook/max-in-flight-filter failed: reason withheld Jan 10 06:48:43 crc kubenswrapper[4810]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 10 06:48:43 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.153606 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs" podUID="db211a74-cc64-4820-b576-179f6affa220" containerName="openshift-config-operator" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.167765 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.228084 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k49l4"] Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.274448 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.275155 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505c4d65-7e70-43ba-ae57-4660f944e1dc-utilities\") pod \"community-operators-k49l4\" (UID: \"505c4d65-7e70-43ba-ae57-4660f944e1dc\") " pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.275219 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6d69\" (UniqueName: \"kubernetes.io/projected/505c4d65-7e70-43ba-ae57-4660f944e1dc-kube-api-access-r6d69\") pod \"community-operators-k49l4\" (UID: \"505c4d65-7e70-43ba-ae57-4660f944e1dc\") " pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.275251 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505c4d65-7e70-43ba-ae57-4660f944e1dc-catalog-content\") pod \"community-operators-k49l4\" (UID: \"505c4d65-7e70-43ba-ae57-4660f944e1dc\") " pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.308737 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.311470 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-chcch"] Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.312367 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.318827 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.324666 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chcch"] Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.377620 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505c4d65-7e70-43ba-ae57-4660f944e1dc-catalog-content\") pod \"community-operators-k49l4\" (UID: \"505c4d65-7e70-43ba-ae57-4660f944e1dc\") " pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.377675 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda74b1c-bf42-4091-af37-e29e1494b2a6-catalog-content\") pod \"certified-operators-chcch\" (UID: \"dda74b1c-bf42-4091-af37-e29e1494b2a6\") " pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.377703 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x682b\" (UniqueName: \"kubernetes.io/projected/dda74b1c-bf42-4091-af37-e29e1494b2a6-kube-api-access-x682b\") pod \"certified-operators-chcch\" (UID: \"dda74b1c-bf42-4091-af37-e29e1494b2a6\") " 
pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.377749 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda74b1c-bf42-4091-af37-e29e1494b2a6-utilities\") pod \"certified-operators-chcch\" (UID: \"dda74b1c-bf42-4091-af37-e29e1494b2a6\") " pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.377808 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505c4d65-7e70-43ba-ae57-4660f944e1dc-utilities\") pod \"community-operators-k49l4\" (UID: \"505c4d65-7e70-43ba-ae57-4660f944e1dc\") " pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.377842 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6d69\" (UniqueName: \"kubernetes.io/projected/505c4d65-7e70-43ba-ae57-4660f944e1dc-kube-api-access-r6d69\") pod \"community-operators-k49l4\" (UID: \"505c4d65-7e70-43ba-ae57-4660f944e1dc\") " pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.378553 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505c4d65-7e70-43ba-ae57-4660f944e1dc-catalog-content\") pod \"community-operators-k49l4\" (UID: \"505c4d65-7e70-43ba-ae57-4660f944e1dc\") " pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.380908 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505c4d65-7e70-43ba-ae57-4660f944e1dc-utilities\") pod \"community-operators-k49l4\" (UID: \"505c4d65-7e70-43ba-ae57-4660f944e1dc\") " 
pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.404062 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6d69\" (UniqueName: \"kubernetes.io/projected/505c4d65-7e70-43ba-ae57-4660f944e1dc-kube-api-access-r6d69\") pod \"community-operators-k49l4\" (UID: \"505c4d65-7e70-43ba-ae57-4660f944e1dc\") " pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.478643 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda74b1c-bf42-4091-af37-e29e1494b2a6-catalog-content\") pod \"certified-operators-chcch\" (UID: \"dda74b1c-bf42-4091-af37-e29e1494b2a6\") " pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.478687 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x682b\" (UniqueName: \"kubernetes.io/projected/dda74b1c-bf42-4091-af37-e29e1494b2a6-kube-api-access-x682b\") pod \"certified-operators-chcch\" (UID: \"dda74b1c-bf42-4091-af37-e29e1494b2a6\") " pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.478721 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda74b1c-bf42-4091-af37-e29e1494b2a6-utilities\") pod \"certified-operators-chcch\" (UID: \"dda74b1c-bf42-4091-af37-e29e1494b2a6\") " pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.481276 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda74b1c-bf42-4091-af37-e29e1494b2a6-utilities\") pod \"certified-operators-chcch\" (UID: \"dda74b1c-bf42-4091-af37-e29e1494b2a6\") " 
pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.481651 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda74b1c-bf42-4091-af37-e29e1494b2a6-catalog-content\") pod \"certified-operators-chcch\" (UID: \"dda74b1c-bf42-4091-af37-e29e1494b2a6\") " pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.484482 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.501985 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-49qfk"] Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.516554 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-49qfk"] Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.516672 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-49qfk" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.570322 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x682b\" (UniqueName: \"kubernetes.io/projected/dda74b1c-bf42-4091-af37-e29e1494b2a6-kube-api-access-x682b\") pod \"certified-operators-chcch\" (UID: \"dda74b1c-bf42-4091-af37-e29e1494b2a6\") " pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.579975 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb4sl\" (UniqueName: \"kubernetes.io/projected/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-kube-api-access-kb4sl\") pod \"community-operators-49qfk\" (UID: \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\") " pod="openshift-marketplace/community-operators-49qfk" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.580026 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-catalog-content\") pod \"community-operators-49qfk\" (UID: \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\") " pod="openshift-marketplace/community-operators-49qfk" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.580127 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-utilities\") pod \"community-operators-49qfk\" (UID: \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\") " pod="openshift-marketplace/community-operators-49qfk" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.646629 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.681441 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb4sl\" (UniqueName: \"kubernetes.io/projected/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-kube-api-access-kb4sl\") pod \"community-operators-49qfk\" (UID: \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\") " pod="openshift-marketplace/community-operators-49qfk" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.681498 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-catalog-content\") pod \"community-operators-49qfk\" (UID: \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\") " pod="openshift-marketplace/community-operators-49qfk" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.681560 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-utilities\") pod \"community-operators-49qfk\" (UID: \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\") " pod="openshift-marketplace/community-operators-49qfk" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.681978 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-utilities\") pod \"community-operators-49qfk\" (UID: \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\") " pod="openshift-marketplace/community-operators-49qfk" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.682580 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-catalog-content\") pod \"community-operators-49qfk\" (UID: \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\") " 
pod="openshift-marketplace/community-operators-49qfk" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.744099 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.744611 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nqtg7"] Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.744645 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xb5xt"] Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.745410 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xb5xt"] Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.745481 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.771873 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb4sl\" (UniqueName: \"kubernetes.io/projected/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-kube-api-access-kb4sl\") pod \"community-operators-49qfk\" (UID: \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\") " pod="openshift-marketplace/community-operators-49qfk" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.782946 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4b646bd-b87f-45fc-8142-cef150cda498-catalog-content\") pod \"certified-operators-xb5xt\" (UID: \"b4b646bd-b87f-45fc-8142-cef150cda498\") " pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.783014 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-8cfhc\" (UniqueName: \"kubernetes.io/projected/b4b646bd-b87f-45fc-8142-cef150cda498-kube-api-access-8cfhc\") pod \"certified-operators-xb5xt\" (UID: \"b4b646bd-b87f-45fc-8142-cef150cda498\") " pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.783032 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4b646bd-b87f-45fc-8142-cef150cda498-utilities\") pod \"certified-operators-xb5xt\" (UID: \"b4b646bd-b87f-45fc-8142-cef150cda498\") " pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.821278 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:43 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:43 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:43 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.821356 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.884521 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-49qfk" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.885019 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4b646bd-b87f-45fc-8142-cef150cda498-catalog-content\") pod \"certified-operators-xb5xt\" (UID: \"b4b646bd-b87f-45fc-8142-cef150cda498\") " pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.885072 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cfhc\" (UniqueName: \"kubernetes.io/projected/b4b646bd-b87f-45fc-8142-cef150cda498-kube-api-access-8cfhc\") pod \"certified-operators-xb5xt\" (UID: \"b4b646bd-b87f-45fc-8142-cef150cda498\") " pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.885094 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4b646bd-b87f-45fc-8142-cef150cda498-utilities\") pod \"certified-operators-xb5xt\" (UID: \"b4b646bd-b87f-45fc-8142-cef150cda498\") " pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.885442 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4b646bd-b87f-45fc-8142-cef150cda498-catalog-content\") pod \"certified-operators-xb5xt\" (UID: \"b4b646bd-b87f-45fc-8142-cef150cda498\") " pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.885763 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4b646bd-b87f-45fc-8142-cef150cda498-utilities\") pod \"certified-operators-xb5xt\" (UID: \"b4b646bd-b87f-45fc-8142-cef150cda498\") " 
pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:48:43 crc kubenswrapper[4810]: I0110 06:48:43.921034 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cfhc\" (UniqueName: \"kubernetes.io/projected/b4b646bd-b87f-45fc-8142-cef150cda498-kube-api-access-8cfhc\") pod \"certified-operators-xb5xt\" (UID: \"b4b646bd-b87f-45fc-8142-cef150cda498\") " pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.001216 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k49l4"] Jan 10 06:48:44 crc kubenswrapper[4810]: W0110 06:48:44.019022 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod505c4d65_7e70_43ba_ae57_4660f944e1dc.slice/crio-59fc5c41c39ccbe4da8b7e4f327b520d7219d6394f43193ca3894f3a0b87b994 WatchSource:0}: Error finding container 59fc5c41c39ccbe4da8b7e4f327b520d7219d6394f43193ca3894f3a0b87b994: Status 404 returned error can't find the container with id 59fc5c41c39ccbe4da8b7e4f327b520d7219d6394f43193ca3894f3a0b87b994 Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.100602 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.111424 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6bmqz" event={"ID":"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f","Type":"ContainerStarted","Data":"71f52db0bc2f94784c2ff34bef2a87edb8751982996d05190ab19d294f7d8516"} Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.111460 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-6bmqz" event={"ID":"abaa9d2a-1e13-4aa3-aca6-fd759a41d46f","Type":"ContainerStarted","Data":"f185e5ee690de57ecb9fd848b42152e2575f3fe1512ef66cbbba686a1923cba0"} Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.125397 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" event={"ID":"f8d04d07-f61c-4c6c-9038-7cca9d199ede","Type":"ContainerStarted","Data":"f3a82cad5c7daf1d945f300fb0192a8cbfa83267deae96028446f1789e39ae0b"} Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.127435 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chcch"] Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.139491 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k49l4" event={"ID":"505c4d65-7e70-43ba-ae57-4660f944e1dc","Type":"ContainerStarted","Data":"59fc5c41c39ccbe4da8b7e4f327b520d7219d6394f43193ca3894f3a0b87b994"} Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.139556 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p68qs" Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.143677 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-6bmqz" podStartSLOduration=11.143648558 
podStartE2EDuration="11.143648558s" podCreationTimestamp="2026-01-10 06:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:44.143026961 +0000 UTC m=+152.758519854" watchObservedRunningTime="2026-01-10 06:48:44.143648558 +0000 UTC m=+152.759141441" Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.256918 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-49qfk"] Jan 10 06:48:44 crc kubenswrapper[4810]: W0110 06:48:44.264158 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0016a5b8_02cc_4e7f_9eb9_145e3f75e669.slice/crio-f89e9a9828f38d65a301d728c94e530c36c058f736a352428428e047eec3219e WatchSource:0}: Error finding container f89e9a9828f38d65a301d728c94e530c36c058f736a352428428e047eec3219e: Status 404 returned error can't find the container with id f89e9a9828f38d65a301d728c94e530c36c058f736a352428428e047eec3219e Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.324659 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xb5xt"] Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.821670 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:44 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:44 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:44 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.821741 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.856592 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.857746 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.859908 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.861826 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.869914 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.901061 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22998ebb-8658-48f0-9ad7-3a03bcd26812-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22998ebb-8658-48f0-9ad7-3a03bcd26812\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 10 06:48:44 crc kubenswrapper[4810]: I0110 06:48:44.901114 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22998ebb-8658-48f0-9ad7-3a03bcd26812-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22998ebb-8658-48f0-9ad7-3a03bcd26812\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.003089 4810 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22998ebb-8658-48f0-9ad7-3a03bcd26812-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22998ebb-8658-48f0-9ad7-3a03bcd26812\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.003400 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22998ebb-8658-48f0-9ad7-3a03bcd26812-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22998ebb-8658-48f0-9ad7-3a03bcd26812\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.003532 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22998ebb-8658-48f0-9ad7-3a03bcd26812-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22998ebb-8658-48f0-9ad7-3a03bcd26812\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.029377 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22998ebb-8658-48f0-9ad7-3a03bcd26812-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22998ebb-8658-48f0-9ad7-3a03bcd26812\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.136873 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" event={"ID":"f8d04d07-f61c-4c6c-9038-7cca9d199ede","Type":"ContainerStarted","Data":"ea6b03e99481b35195ac9d8a10f516eb2006803b1df41d240877f0640643b0ee"} Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.138788 4810 generic.go:334] "Generic (PLEG): container finished" podID="505c4d65-7e70-43ba-ae57-4660f944e1dc" 
containerID="856e698e1b0a7125fffb0183aeefb22bb1ed3a88264fb678d1e91a96b36bafdf" exitCode=0 Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.138877 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k49l4" event={"ID":"505c4d65-7e70-43ba-ae57-4660f944e1dc","Type":"ContainerDied","Data":"856e698e1b0a7125fffb0183aeefb22bb1ed3a88264fb678d1e91a96b36bafdf"} Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.140309 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xb5xt" event={"ID":"b4b646bd-b87f-45fc-8142-cef150cda498","Type":"ContainerStarted","Data":"ef226f551b831fb9a94797bd4f5d304b49bd6c2d46130a5c0f69083f1bfe260e"} Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.142534 4810 generic.go:334] "Generic (PLEG): container finished" podID="dda74b1c-bf42-4091-af37-e29e1494b2a6" containerID="528bd4c6c541efe1e3b5e84f35a965deaa3a9f7fa7ba8e58864b31b6599c3299" exitCode=0 Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.142625 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chcch" event={"ID":"dda74b1c-bf42-4091-af37-e29e1494b2a6","Type":"ContainerDied","Data":"528bd4c6c541efe1e3b5e84f35a965deaa3a9f7fa7ba8e58864b31b6599c3299"} Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.142656 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chcch" event={"ID":"dda74b1c-bf42-4091-af37-e29e1494b2a6","Type":"ContainerStarted","Data":"3ed9b7f5ffcb0aa09396ec532e92cd9b309b05489f18ad385596a9b58d16c797"} Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.144907 4810 generic.go:334] "Generic (PLEG): container finished" podID="d24f1d85-f530-4f1c-ae79-db5d7b273d6c" containerID="3392bfb749ed91d9818b7bcd32c553a668584f7a7ae273a619a95c497291e5b6" exitCode=0 Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.144985 4810 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" event={"ID":"d24f1d85-f530-4f1c-ae79-db5d7b273d6c","Type":"ContainerDied","Data":"3392bfb749ed91d9818b7bcd32c553a668584f7a7ae273a619a95c497291e5b6"} Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.148918 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49qfk" event={"ID":"0016a5b8-02cc-4e7f-9eb9-145e3f75e669","Type":"ContainerStarted","Data":"a3f61eaa09546c7fc1fdaf92ad9da340cd595bf06a51b14aa660021ef172a23b"} Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.149041 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49qfk" event={"ID":"0016a5b8-02cc-4e7f-9eb9-145e3f75e669","Type":"ContainerStarted","Data":"f89e9a9828f38d65a301d728c94e530c36c058f736a352428428e047eec3219e"} Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.175886 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.308031 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t4lxp"] Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.309441 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.311894 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.330969 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4lxp"] Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.408184 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a011bb8-70ef-4c9b-b5cd-642600d792b4-utilities\") pod \"redhat-marketplace-t4lxp\" (UID: \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\") " pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.408256 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.408285 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p24l\" (UniqueName: \"kubernetes.io/projected/6a011bb8-70ef-4c9b-b5cd-642600d792b4-kube-api-access-4p24l\") pod \"redhat-marketplace-t4lxp\" (UID: \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\") " pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.408412 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a011bb8-70ef-4c9b-b5cd-642600d792b4-catalog-content\") pod \"redhat-marketplace-t4lxp\" (UID: \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\") " pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.514277 4810 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-4p24l\" (UniqueName: \"kubernetes.io/projected/6a011bb8-70ef-4c9b-b5cd-642600d792b4-kube-api-access-4p24l\") pod \"redhat-marketplace-t4lxp\" (UID: \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\") " pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.514516 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a011bb8-70ef-4c9b-b5cd-642600d792b4-catalog-content\") pod \"redhat-marketplace-t4lxp\" (UID: \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\") " pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.514590 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a011bb8-70ef-4c9b-b5cd-642600d792b4-utilities\") pod \"redhat-marketplace-t4lxp\" (UID: \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\") " pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.514994 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a011bb8-70ef-4c9b-b5cd-642600d792b4-utilities\") pod \"redhat-marketplace-t4lxp\" (UID: \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\") " pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.515228 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a011bb8-70ef-4c9b-b5cd-642600d792b4-catalog-content\") pod \"redhat-marketplace-t4lxp\" (UID: \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\") " pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.539135 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p24l\" (UniqueName: 
\"kubernetes.io/projected/6a011bb8-70ef-4c9b-b5cd-642600d792b4-kube-api-access-4p24l\") pod \"redhat-marketplace-t4lxp\" (UID: \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\") " pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.633749 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.735688 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-645sk"] Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.737217 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-645sk"] Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.737514 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-645sk" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.744052 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.744213 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.772131 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.820884 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:45 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:45 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:45 crc 
kubenswrapper[4810]: healthz check failed Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.821006 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.832947 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s629q\" (UniqueName: \"kubernetes.io/projected/c0fc4503-794c-403e-a6f7-e18be2410845-kube-api-access-s629q\") pod \"redhat-marketplace-645sk\" (UID: \"c0fc4503-794c-403e-a6f7-e18be2410845\") " pod="openshift-marketplace/redhat-marketplace-645sk" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.832983 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fc4503-794c-403e-a6f7-e18be2410845-utilities\") pod \"redhat-marketplace-645sk\" (UID: \"c0fc4503-794c-403e-a6f7-e18be2410845\") " pod="openshift-marketplace/redhat-marketplace-645sk" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.833000 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fc4503-794c-403e-a6f7-e18be2410845-catalog-content\") pod \"redhat-marketplace-645sk\" (UID: \"c0fc4503-794c-403e-a6f7-e18be2410845\") " pod="openshift-marketplace/redhat-marketplace-645sk" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.851853 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.856448 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:45 
crc kubenswrapper[4810]: I0110 06:48:45.868942 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4lxp"] Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.934664 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s629q\" (UniqueName: \"kubernetes.io/projected/c0fc4503-794c-403e-a6f7-e18be2410845-kube-api-access-s629q\") pod \"redhat-marketplace-645sk\" (UID: \"c0fc4503-794c-403e-a6f7-e18be2410845\") " pod="openshift-marketplace/redhat-marketplace-645sk" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.934739 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fc4503-794c-403e-a6f7-e18be2410845-utilities\") pod \"redhat-marketplace-645sk\" (UID: \"c0fc4503-794c-403e-a6f7-e18be2410845\") " pod="openshift-marketplace/redhat-marketplace-645sk" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.934793 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fc4503-794c-403e-a6f7-e18be2410845-catalog-content\") pod \"redhat-marketplace-645sk\" (UID: \"c0fc4503-794c-403e-a6f7-e18be2410845\") " pod="openshift-marketplace/redhat-marketplace-645sk" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.935393 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fc4503-794c-403e-a6f7-e18be2410845-catalog-content\") pod \"redhat-marketplace-645sk\" (UID: \"c0fc4503-794c-403e-a6f7-e18be2410845\") " pod="openshift-marketplace/redhat-marketplace-645sk" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.935478 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fc4503-794c-403e-a6f7-e18be2410845-utilities\") pod \"redhat-marketplace-645sk\" 
(UID: \"c0fc4503-794c-403e-a6f7-e18be2410845\") " pod="openshift-marketplace/redhat-marketplace-645sk" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.942080 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-xw9ln container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.942139 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xw9ln" podUID="845a1b6a-e7d4-467d-a835-053708fed54f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.942171 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-xw9ln container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.942209 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xw9ln" podUID="845a1b6a-e7d4-467d-a835-053708fed54f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.952919 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.953267 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.954341 4810 patch_prober.go:28] interesting pod/console-f9d7485db-mh9w2 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.954376 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-mh9w2" podUID="1ad4afbe-31ce-4f61-866c-390b33da3bbb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 10 06:48:45 crc kubenswrapper[4810]: I0110 06:48:45.960830 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s629q\" (UniqueName: \"kubernetes.io/projected/c0fc4503-794c-403e-a6f7-e18be2410845-kube-api-access-s629q\") pod \"redhat-marketplace-645sk\" (UID: \"c0fc4503-794c-403e-a6f7-e18be2410845\") " pod="openshift-marketplace/redhat-marketplace-645sk" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.068376 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-645sk" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.161423 4810 generic.go:334] "Generic (PLEG): container finished" podID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669" containerID="a3f61eaa09546c7fc1fdaf92ad9da340cd595bf06a51b14aa660021ef172a23b" exitCode=0 Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.161472 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49qfk" event={"ID":"0016a5b8-02cc-4e7f-9eb9-145e3f75e669","Type":"ContainerDied","Data":"a3f61eaa09546c7fc1fdaf92ad9da340cd595bf06a51b14aa660021ef172a23b"} Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.162527 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"22998ebb-8658-48f0-9ad7-3a03bcd26812","Type":"ContainerStarted","Data":"d7ff700d96fde76aed7b4ecf84940d47fd55a79e3175682713e8ccd6df51a292"} Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.164694 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.165648 4810 generic.go:334] "Generic (PLEG): container finished" podID="b4b646bd-b87f-45fc-8142-cef150cda498" containerID="988f8780afbff9d574d3c686bfc8909def34af50741e7c3d10598b7dba440d37" exitCode=0 Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.165709 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xb5xt" event={"ID":"b4b646bd-b87f-45fc-8142-cef150cda498","Type":"ContainerDied","Data":"988f8780afbff9d574d3c686bfc8909def34af50741e7c3d10598b7dba440d37"} Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.169358 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4lxp" 
event={"ID":"6a011bb8-70ef-4c9b-b5cd-642600d792b4","Type":"ContainerStarted","Data":"dce29c8470a8ae72b4ce9b77cfb85ac20f2f27177871407826165cddf71e2074"} Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.178644 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wf9f4" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.180443 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-9dnp6" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.255717 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" podStartSLOduration=135.255700094 podStartE2EDuration="2m15.255700094s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:46.255032757 +0000 UTC m=+154.870525640" watchObservedRunningTime="2026-01-10 06:48:46.255700094 +0000 UTC m=+154.871192967" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.345284 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qbmz8"] Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.346170 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.357091 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.369571 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-645sk"] Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.376593 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qbmz8"] Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.445787 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fa3125f-8e13-430b-8227-39bd4c3e011b-catalog-content\") pod \"redhat-operators-qbmz8\" (UID: \"3fa3125f-8e13-430b-8227-39bd4c3e011b\") " pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.445851 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fa3125f-8e13-430b-8227-39bd4c3e011b-utilities\") pod \"redhat-operators-qbmz8\" (UID: \"3fa3125f-8e13-430b-8227-39bd4c3e011b\") " pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.445893 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knfbm\" (UniqueName: \"kubernetes.io/projected/3fa3125f-8e13-430b-8227-39bd4c3e011b-kube-api-access-knfbm\") pod \"redhat-operators-qbmz8\" (UID: \"3fa3125f-8e13-430b-8227-39bd4c3e011b\") " pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.546598 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/3fa3125f-8e13-430b-8227-39bd4c3e011b-utilities\") pod \"redhat-operators-qbmz8\" (UID: \"3fa3125f-8e13-430b-8227-39bd4c3e011b\") " pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.546667 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knfbm\" (UniqueName: \"kubernetes.io/projected/3fa3125f-8e13-430b-8227-39bd4c3e011b-kube-api-access-knfbm\") pod \"redhat-operators-qbmz8\" (UID: \"3fa3125f-8e13-430b-8227-39bd4c3e011b\") " pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.546716 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fa3125f-8e13-430b-8227-39bd4c3e011b-catalog-content\") pod \"redhat-operators-qbmz8\" (UID: \"3fa3125f-8e13-430b-8227-39bd4c3e011b\") " pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.547216 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fa3125f-8e13-430b-8227-39bd4c3e011b-catalog-content\") pod \"redhat-operators-qbmz8\" (UID: \"3fa3125f-8e13-430b-8227-39bd4c3e011b\") " pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.547418 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fa3125f-8e13-430b-8227-39bd4c3e011b-utilities\") pod \"redhat-operators-qbmz8\" (UID: \"3fa3125f-8e13-430b-8227-39bd4c3e011b\") " pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.563244 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knfbm\" (UniqueName: 
\"kubernetes.io/projected/3fa3125f-8e13-430b-8227-39bd4c3e011b-kube-api-access-knfbm\") pod \"redhat-operators-qbmz8\" (UID: \"3fa3125f-8e13-430b-8227-39bd4c3e011b\") " pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.675078 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.696258 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rtfb8"] Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.697470 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.703767 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rtfb8"] Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.749249 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78118904-197b-4ed9-bd6e-c02dc35f4e94-catalog-content\") pod \"redhat-operators-rtfb8\" (UID: \"78118904-197b-4ed9-bd6e-c02dc35f4e94\") " pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.749293 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78118904-197b-4ed9-bd6e-c02dc35f4e94-utilities\") pod \"redhat-operators-rtfb8\" (UID: \"78118904-197b-4ed9-bd6e-c02dc35f4e94\") " pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.749375 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clvl9\" (UniqueName: 
\"kubernetes.io/projected/78118904-197b-4ed9-bd6e-c02dc35f4e94-kube-api-access-clvl9\") pod \"redhat-operators-rtfb8\" (UID: \"78118904-197b-4ed9-bd6e-c02dc35f4e94\") " pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.818724 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qhr8m" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.821615 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:46 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:46 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:46 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.821669 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.850989 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clvl9\" (UniqueName: \"kubernetes.io/projected/78118904-197b-4ed9-bd6e-c02dc35f4e94-kube-api-access-clvl9\") pod \"redhat-operators-rtfb8\" (UID: \"78118904-197b-4ed9-bd6e-c02dc35f4e94\") " pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.851230 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78118904-197b-4ed9-bd6e-c02dc35f4e94-utilities\") pod \"redhat-operators-rtfb8\" (UID: \"78118904-197b-4ed9-bd6e-c02dc35f4e94\") " 
pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.851266 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78118904-197b-4ed9-bd6e-c02dc35f4e94-catalog-content\") pod \"redhat-operators-rtfb8\" (UID: \"78118904-197b-4ed9-bd6e-c02dc35f4e94\") " pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.851813 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78118904-197b-4ed9-bd6e-c02dc35f4e94-utilities\") pod \"redhat-operators-rtfb8\" (UID: \"78118904-197b-4ed9-bd6e-c02dc35f4e94\") " pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.851925 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78118904-197b-4ed9-bd6e-c02dc35f4e94-catalog-content\") pod \"redhat-operators-rtfb8\" (UID: \"78118904-197b-4ed9-bd6e-c02dc35f4e94\") " pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:48:46 crc kubenswrapper[4810]: I0110 06:48:46.888176 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clvl9\" (UniqueName: \"kubernetes.io/projected/78118904-197b-4ed9-bd6e-c02dc35f4e94-kube-api-access-clvl9\") pod \"redhat-operators-rtfb8\" (UID: \"78118904-197b-4ed9-bd6e-c02dc35f4e94\") " pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.057358 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.176345 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-645sk" event={"ID":"c0fc4503-794c-403e-a6f7-e18be2410845","Type":"ContainerStarted","Data":"1e2aaf5408d0c8035a19422c5f6f02cedbdcc1dcf122a02f950b05e8eb15d37f"} Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.520318 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.521178 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.523694 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.523868 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.530488 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.661748 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11b20d8b-48e4-44ca-b17a-a4fcfcfca21e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.661817 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11b20d8b-48e4-44ca-b17a-a4fcfcfca21e-kube-api-access\") pod 
\"revision-pruner-8-crc\" (UID: \"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.763297 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11b20d8b-48e4-44ca-b17a-a4fcfcfca21e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.763404 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11b20d8b-48e4-44ca-b17a-a4fcfcfca21e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.763431 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11b20d8b-48e4-44ca-b17a-a4fcfcfca21e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.787707 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11b20d8b-48e4-44ca-b17a-a4fcfcfca21e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.820826 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:47 crc 
kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:47 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:47 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.820880 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:47 crc kubenswrapper[4810]: I0110 06:48:47.846466 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 10 06:48:48 crc kubenswrapper[4810]: I0110 06:48:48.823590 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:48 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:48 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:48 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:48 crc kubenswrapper[4810]: I0110 06:48:48.823671 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:49 crc kubenswrapper[4810]: I0110 06:48:49.821878 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:49 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:49 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:49 crc 
kubenswrapper[4810]: healthz check failed Jan 10 06:48:49 crc kubenswrapper[4810]: I0110 06:48:49.822285 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.107715 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.216551 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-config-volume\") pod \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\" (UID: \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\") " Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.216941 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-secret-volume\") pod \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\" (UID: \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\") " Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.216987 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dnxf\" (UniqueName: \"kubernetes.io/projected/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-kube-api-access-5dnxf\") pod \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\" (UID: \"d24f1d85-f530-4f1c-ae79-db5d7b273d6c\") " Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.224804 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-kube-api-access-5dnxf" (OuterVolumeSpecName: "kube-api-access-5dnxf") pod "d24f1d85-f530-4f1c-ae79-db5d7b273d6c" (UID: 
"d24f1d85-f530-4f1c-ae79-db5d7b273d6c"). InnerVolumeSpecName "kube-api-access-5dnxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.227871 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4lxp" event={"ID":"6a011bb8-70ef-4c9b-b5cd-642600d792b4","Type":"ContainerStarted","Data":"35628916d7d1d729f6e0abb4791b67cc8ac2666aa0a7fcf976e1c9cdf5862c29"} Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.229131 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"22998ebb-8658-48f0-9ad7-3a03bcd26812","Type":"ContainerStarted","Data":"a8f09a5f733107e160173e1323ca999d3dd1aac6e3b1efc96f1d72301f378d39"} Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.230473 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" event={"ID":"d24f1d85-f530-4f1c-ae79-db5d7b273d6c","Type":"ContainerDied","Data":"7185e21c62984b8b3b2444d00c84edc6a83bb41dd3fa5923722f6cc795862902"} Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.230502 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7185e21c62984b8b3b2444d00c84edc6a83bb41dd3fa5923722f6cc795862902" Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.230549 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29467125-g742d" Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.233624 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d24f1d85-f530-4f1c-ae79-db5d7b273d6c" (UID: "d24f1d85-f530-4f1c-ae79-db5d7b273d6c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.317826 4810 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.317860 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dnxf\" (UniqueName: \"kubernetes.io/projected/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-kube-api-access-5dnxf\") on node \"crc\" DevicePath \"\"" Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.492162 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-config-volume" (OuterVolumeSpecName: "config-volume") pod "d24f1d85-f530-4f1c-ae79-db5d7b273d6c" (UID: "d24f1d85-f530-4f1c-ae79-db5d7b273d6c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.520448 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d24f1d85-f530-4f1c-ae79-db5d7b273d6c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.521257 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 10 06:48:50 crc kubenswrapper[4810]: W0110 06:48:50.546685 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod11b20d8b_48e4_44ca_b17a_a4fcfcfca21e.slice/crio-06f5e958fae251618f3d3f404ae4286fdccfcd08a9b09c5795f729681c4c94c3 WatchSource:0}: Error finding container 06f5e958fae251618f3d3f404ae4286fdccfcd08a9b09c5795f729681c4c94c3: Status 404 returned error can't find the container with id 06f5e958fae251618f3d3f404ae4286fdccfcd08a9b09c5795f729681c4c94c3 Jan 10 06:48:50 crc 
kubenswrapper[4810]: I0110 06:48:50.677304 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rtfb8"] Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.678297 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qbmz8"] Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.821717 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:50 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:50 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:50 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.821790 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.883256 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 10 06:48:50 crc kubenswrapper[4810]: I0110 06:48:50.883309 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 10 06:48:51 crc kubenswrapper[4810]: I0110 06:48:51.236612 4810 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e","Type":"ContainerStarted","Data":"06f5e958fae251618f3d3f404ae4286fdccfcd08a9b09c5795f729681c4c94c3"} Jan 10 06:48:51 crc kubenswrapper[4810]: I0110 06:48:51.237783 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfb8" event={"ID":"78118904-197b-4ed9-bd6e-c02dc35f4e94","Type":"ContainerStarted","Data":"be2f681d8293407fa187e5d162da0551d263cf572d236de42ce71c7deb762e2d"} Jan 10 06:48:51 crc kubenswrapper[4810]: I0110 06:48:51.239728 4810 generic.go:334] "Generic (PLEG): container finished" podID="22998ebb-8658-48f0-9ad7-3a03bcd26812" containerID="a8f09a5f733107e160173e1323ca999d3dd1aac6e3b1efc96f1d72301f378d39" exitCode=0 Jan 10 06:48:51 crc kubenswrapper[4810]: I0110 06:48:51.239775 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"22998ebb-8658-48f0-9ad7-3a03bcd26812","Type":"ContainerDied","Data":"a8f09a5f733107e160173e1323ca999d3dd1aac6e3b1efc96f1d72301f378d39"} Jan 10 06:48:51 crc kubenswrapper[4810]: I0110 06:48:51.240837 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbmz8" event={"ID":"3fa3125f-8e13-430b-8227-39bd4c3e011b","Type":"ContainerStarted","Data":"abf4f9f67f60b6bd29899071791c1559930d9118804f87544a4761f661241ca7"} Jan 10 06:48:51 crc kubenswrapper[4810]: I0110 06:48:51.243516 4810 generic.go:334] "Generic (PLEG): container finished" podID="c0fc4503-794c-403e-a6f7-e18be2410845" containerID="eb3ca0879c2eb80cee833a7bfe3f65d91088ff662fd300b4334e2fbdb70dcf9d" exitCode=0 Jan 10 06:48:51 crc kubenswrapper[4810]: I0110 06:48:51.243541 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-645sk" 
event={"ID":"c0fc4503-794c-403e-a6f7-e18be2410845","Type":"ContainerDied","Data":"eb3ca0879c2eb80cee833a7bfe3f65d91088ff662fd300b4334e2fbdb70dcf9d"} Jan 10 06:48:51 crc kubenswrapper[4810]: I0110 06:48:51.245272 4810 generic.go:334] "Generic (PLEG): container finished" podID="6a011bb8-70ef-4c9b-b5cd-642600d792b4" containerID="35628916d7d1d729f6e0abb4791b67cc8ac2666aa0a7fcf976e1c9cdf5862c29" exitCode=0 Jan 10 06:48:51 crc kubenswrapper[4810]: I0110 06:48:51.245333 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4lxp" event={"ID":"6a011bb8-70ef-4c9b-b5cd-642600d792b4","Type":"ContainerDied","Data":"35628916d7d1d729f6e0abb4791b67cc8ac2666aa0a7fcf976e1c9cdf5862c29"} Jan 10 06:48:51 crc kubenswrapper[4810]: I0110 06:48:51.821283 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:51 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:51 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:51 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:51 crc kubenswrapper[4810]: I0110 06:48:51.821619 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:51 crc kubenswrapper[4810]: I0110 06:48:51.869112 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qxj7l" Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.251735 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e","Type":"ContainerStarted","Data":"56b524a884896564b007eaeada5ef6241506206701d562d707eeaa02d81a33e1"} Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.446620 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs\") pod \"network-metrics-daemon-9nv84\" (UID: \"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.453242 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6741fd18-31c0-4bc3-be74-c0f6080c67af-metrics-certs\") pod \"network-metrics-daemon-9nv84\" (UID: \"6741fd18-31c0-4bc3-be74-c0f6080c67af\") " pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.499684 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.529032 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9nv84" Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.653021 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22998ebb-8658-48f0-9ad7-3a03bcd26812-kubelet-dir\") pod \"22998ebb-8658-48f0-9ad7-3a03bcd26812\" (UID: \"22998ebb-8658-48f0-9ad7-3a03bcd26812\") " Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.653062 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22998ebb-8658-48f0-9ad7-3a03bcd26812-kube-api-access\") pod \"22998ebb-8658-48f0-9ad7-3a03bcd26812\" (UID: \"22998ebb-8658-48f0-9ad7-3a03bcd26812\") " Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.653703 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22998ebb-8658-48f0-9ad7-3a03bcd26812-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "22998ebb-8658-48f0-9ad7-3a03bcd26812" (UID: "22998ebb-8658-48f0-9ad7-3a03bcd26812"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.658643 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22998ebb-8658-48f0-9ad7-3a03bcd26812-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "22998ebb-8658-48f0-9ad7-3a03bcd26812" (UID: "22998ebb-8658-48f0-9ad7-3a03bcd26812"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.753336 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.754064 4810 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22998ebb-8658-48f0-9ad7-3a03bcd26812-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.754086 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22998ebb-8658-48f0-9ad7-3a03bcd26812-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.820375 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:52 crc kubenswrapper[4810]: [-]has-synced failed: reason withheld Jan 10 06:48:52 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:52 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.820436 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:52 crc kubenswrapper[4810]: I0110 06:48:52.955937 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9nv84"] Jan 10 06:48:53 crc kubenswrapper[4810]: I0110 06:48:53.258800 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfb8" 
event={"ID":"78118904-197b-4ed9-bd6e-c02dc35f4e94","Type":"ContainerStarted","Data":"e563a2da27777f0410b6f8ffe10315b1b3e267c6d648ad94fcd5264ea06a8022"} Jan 10 06:48:53 crc kubenswrapper[4810]: I0110 06:48:53.260485 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"22998ebb-8658-48f0-9ad7-3a03bcd26812","Type":"ContainerDied","Data":"d7ff700d96fde76aed7b4ecf84940d47fd55a79e3175682713e8ccd6df51a292"} Jan 10 06:48:53 crc kubenswrapper[4810]: I0110 06:48:53.260551 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7ff700d96fde76aed7b4ecf84940d47fd55a79e3175682713e8ccd6df51a292" Jan 10 06:48:53 crc kubenswrapper[4810]: I0110 06:48:53.260511 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 10 06:48:53 crc kubenswrapper[4810]: I0110 06:48:53.262309 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9nv84" event={"ID":"6741fd18-31c0-4bc3-be74-c0f6080c67af","Type":"ContainerStarted","Data":"ba5d4638ac433ecb75c73f05cbf51f0257a9d4dddae1c534d1a49cdd2dc7eea3"} Jan 10 06:48:53 crc kubenswrapper[4810]: I0110 06:48:53.263783 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbmz8" event={"ID":"3fa3125f-8e13-430b-8227-39bd4c3e011b","Type":"ContainerStarted","Data":"b97cad88183dc333ce8854ba712408f447c8eadc2898403babe5eaf0a52ff4c6"} Jan 10 06:48:53 crc kubenswrapper[4810]: I0110 06:48:53.275986 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:48:53 crc kubenswrapper[4810]: I0110 06:48:53.279465 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=6.279450995 podStartE2EDuration="6.279450995s" 
podCreationTimestamp="2026-01-10 06:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:48:53.276901817 +0000 UTC m=+161.892394700" watchObservedRunningTime="2026-01-10 06:48:53.279450995 +0000 UTC m=+161.894943888" Jan 10 06:48:53 crc kubenswrapper[4810]: I0110 06:48:53.822366 4810 patch_prober.go:28] interesting pod/router-default-5444994796-qhr8m container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 10 06:48:53 crc kubenswrapper[4810]: [+]has-synced ok Jan 10 06:48:53 crc kubenswrapper[4810]: [+]process-running ok Jan 10 06:48:53 crc kubenswrapper[4810]: healthz check failed Jan 10 06:48:53 crc kubenswrapper[4810]: I0110 06:48:53.822445 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qhr8m" podUID="cbc78376-08c0-4d09-a627-ca17eec0ceb3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 10 06:48:54 crc kubenswrapper[4810]: I0110 06:48:54.269808 4810 generic.go:334] "Generic (PLEG): container finished" podID="3fa3125f-8e13-430b-8227-39bd4c3e011b" containerID="b97cad88183dc333ce8854ba712408f447c8eadc2898403babe5eaf0a52ff4c6" exitCode=0 Jan 10 06:48:54 crc kubenswrapper[4810]: I0110 06:48:54.269906 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbmz8" event={"ID":"3fa3125f-8e13-430b-8227-39bd4c3e011b","Type":"ContainerDied","Data":"b97cad88183dc333ce8854ba712408f447c8eadc2898403babe5eaf0a52ff4c6"} Jan 10 06:48:54 crc kubenswrapper[4810]: I0110 06:48:54.271166 4810 generic.go:334] "Generic (PLEG): container finished" podID="78118904-197b-4ed9-bd6e-c02dc35f4e94" containerID="e563a2da27777f0410b6f8ffe10315b1b3e267c6d648ad94fcd5264ea06a8022" exitCode=0 Jan 10 06:48:54 crc kubenswrapper[4810]: 
I0110 06:48:54.271205 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfb8" event={"ID":"78118904-197b-4ed9-bd6e-c02dc35f4e94","Type":"ContainerDied","Data":"e563a2da27777f0410b6f8ffe10315b1b3e267c6d648ad94fcd5264ea06a8022"} Jan 10 06:48:54 crc kubenswrapper[4810]: I0110 06:48:54.821811 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qhr8m" Jan 10 06:48:54 crc kubenswrapper[4810]: I0110 06:48:54.826062 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qhr8m" Jan 10 06:48:55 crc kubenswrapper[4810]: I0110 06:48:55.287922 4810 generic.go:334] "Generic (PLEG): container finished" podID="11b20d8b-48e4-44ca-b17a-a4fcfcfca21e" containerID="56b524a884896564b007eaeada5ef6241506206701d562d707eeaa02d81a33e1" exitCode=0 Jan 10 06:48:55 crc kubenswrapper[4810]: I0110 06:48:55.288002 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e","Type":"ContainerDied","Data":"56b524a884896564b007eaeada5ef6241506206701d562d707eeaa02d81a33e1"} Jan 10 06:48:55 crc kubenswrapper[4810]: I0110 06:48:55.290542 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9nv84" event={"ID":"6741fd18-31c0-4bc3-be74-c0f6080c67af","Type":"ContainerStarted","Data":"b1f0531a50e91d09ddf1b5c9558110833f764bd72f71eab1e1f12f07294386b6"} Jan 10 06:48:55 crc kubenswrapper[4810]: I0110 06:48:55.943136 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-xw9ln container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 10 06:48:55 crc kubenswrapper[4810]: I0110 06:48:55.943435 4810 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xw9ln" podUID="845a1b6a-e7d4-467d-a835-053708fed54f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 10 06:48:55 crc kubenswrapper[4810]: I0110 06:48:55.943152 4810 patch_prober.go:28] interesting pod/downloads-7954f5f757-xw9ln container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 10 06:48:55 crc kubenswrapper[4810]: I0110 06:48:55.943501 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xw9ln" podUID="845a1b6a-e7d4-467d-a835-053708fed54f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 10 06:48:55 crc kubenswrapper[4810]: I0110 06:48:55.962633 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:55 crc kubenswrapper[4810]: I0110 06:48:55.969351 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-mh9w2" Jan 10 06:48:56 crc kubenswrapper[4810]: I0110 06:48:56.605483 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 10 06:48:56 crc kubenswrapper[4810]: I0110 06:48:56.712058 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11b20d8b-48e4-44ca-b17a-a4fcfcfca21e-kubelet-dir\") pod \"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e\" (UID: \"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e\") " Jan 10 06:48:56 crc kubenswrapper[4810]: I0110 06:48:56.712181 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11b20d8b-48e4-44ca-b17a-a4fcfcfca21e-kube-api-access\") pod \"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e\" (UID: \"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e\") " Jan 10 06:48:56 crc kubenswrapper[4810]: I0110 06:48:56.713053 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11b20d8b-48e4-44ca-b17a-a4fcfcfca21e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "11b20d8b-48e4-44ca-b17a-a4fcfcfca21e" (UID: "11b20d8b-48e4-44ca-b17a-a4fcfcfca21e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:48:56 crc kubenswrapper[4810]: I0110 06:48:56.734047 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b20d8b-48e4-44ca-b17a-a4fcfcfca21e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "11b20d8b-48e4-44ca-b17a-a4fcfcfca21e" (UID: "11b20d8b-48e4-44ca-b17a-a4fcfcfca21e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:48:56 crc kubenswrapper[4810]: I0110 06:48:56.812985 4810 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11b20d8b-48e4-44ca-b17a-a4fcfcfca21e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 10 06:48:56 crc kubenswrapper[4810]: I0110 06:48:56.813005 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11b20d8b-48e4-44ca-b17a-a4fcfcfca21e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 10 06:48:57 crc kubenswrapper[4810]: I0110 06:48:57.303423 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"11b20d8b-48e4-44ca-b17a-a4fcfcfca21e","Type":"ContainerDied","Data":"06f5e958fae251618f3d3f404ae4286fdccfcd08a9b09c5795f729681c4c94c3"} Jan 10 06:48:57 crc kubenswrapper[4810]: I0110 06:48:57.303463 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06f5e958fae251618f3d3f404ae4286fdccfcd08a9b09c5795f729681c4c94c3" Jan 10 06:48:57 crc kubenswrapper[4810]: I0110 06:48:57.303515 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 10 06:49:00 crc kubenswrapper[4810]: I0110 06:49:00.329940 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9nv84" event={"ID":"6741fd18-31c0-4bc3-be74-c0f6080c67af","Type":"ContainerStarted","Data":"5fe92202c3e105f8499ba5cf2d41d963f56acc1aa4a0f186b7e109e8638fa690"} Jan 10 06:49:01 crc kubenswrapper[4810]: I0110 06:49:01.352944 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-9nv84" podStartSLOduration=150.352928404 podStartE2EDuration="2m30.352928404s" podCreationTimestamp="2026-01-10 06:46:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:49:01.349646878 +0000 UTC m=+169.965139761" watchObservedRunningTime="2026-01-10 06:49:01.352928404 +0000 UTC m=+169.968421287" Jan 10 06:49:03 crc kubenswrapper[4810]: I0110 06:49:03.280418 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:49:05 crc kubenswrapper[4810]: I0110 06:49:05.967481 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-xw9ln" Jan 10 06:49:16 crc kubenswrapper[4810]: I0110 06:49:16.806105 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4hwtr" Jan 10 06:49:20 crc kubenswrapper[4810]: I0110 06:49:20.882744 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 10 06:49:20 crc kubenswrapper[4810]: I0110 
06:49:20.883426 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 10 06:49:23 crc kubenswrapper[4810]: I0110 06:49:23.050469 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 10 06:49:25 crc kubenswrapper[4810]: I0110 06:49:25.918716 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 10 06:49:25 crc kubenswrapper[4810]: E0110 06:49:25.919313 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24f1d85-f530-4f1c-ae79-db5d7b273d6c" containerName="collect-profiles" Jan 10 06:49:25 crc kubenswrapper[4810]: I0110 06:49:25.919328 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d24f1d85-f530-4f1c-ae79-db5d7b273d6c" containerName="collect-profiles" Jan 10 06:49:25 crc kubenswrapper[4810]: E0110 06:49:25.919341 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b20d8b-48e4-44ca-b17a-a4fcfcfca21e" containerName="pruner" Jan 10 06:49:25 crc kubenswrapper[4810]: I0110 06:49:25.919347 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b20d8b-48e4-44ca-b17a-a4fcfcfca21e" containerName="pruner" Jan 10 06:49:25 crc kubenswrapper[4810]: E0110 06:49:25.919363 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22998ebb-8658-48f0-9ad7-3a03bcd26812" containerName="pruner" Jan 10 06:49:25 crc kubenswrapper[4810]: I0110 06:49:25.919370 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="22998ebb-8658-48f0-9ad7-3a03bcd26812" containerName="pruner" Jan 10 06:49:25 crc kubenswrapper[4810]: I0110 06:49:25.919459 4810 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="22998ebb-8658-48f0-9ad7-3a03bcd26812" containerName="pruner" Jan 10 06:49:25 crc kubenswrapper[4810]: I0110 06:49:25.919469 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="d24f1d85-f530-4f1c-ae79-db5d7b273d6c" containerName="collect-profiles" Jan 10 06:49:25 crc kubenswrapper[4810]: I0110 06:49:25.919477 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b20d8b-48e4-44ca-b17a-a4fcfcfca21e" containerName="pruner" Jan 10 06:49:25 crc kubenswrapper[4810]: I0110 06:49:25.920720 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 10 06:49:25 crc kubenswrapper[4810]: I0110 06:49:25.923552 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 10 06:49:25 crc kubenswrapper[4810]: I0110 06:49:25.924276 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 10 06:49:25 crc kubenswrapper[4810]: I0110 06:49:25.935758 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 10 06:49:26 crc kubenswrapper[4810]: I0110 06:49:26.049917 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85307182-5fab-4720-92e7-8479a363d94c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"85307182-5fab-4720-92e7-8479a363d94c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 10 06:49:26 crc kubenswrapper[4810]: I0110 06:49:26.050304 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85307182-5fab-4720-92e7-8479a363d94c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"85307182-5fab-4720-92e7-8479a363d94c\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 10 06:49:26 crc kubenswrapper[4810]: I0110 06:49:26.151871 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85307182-5fab-4720-92e7-8479a363d94c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"85307182-5fab-4720-92e7-8479a363d94c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 10 06:49:26 crc kubenswrapper[4810]: I0110 06:49:26.152039 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85307182-5fab-4720-92e7-8479a363d94c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"85307182-5fab-4720-92e7-8479a363d94c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 10 06:49:26 crc kubenswrapper[4810]: I0110 06:49:26.152175 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85307182-5fab-4720-92e7-8479a363d94c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"85307182-5fab-4720-92e7-8479a363d94c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 10 06:49:26 crc kubenswrapper[4810]: I0110 06:49:26.196012 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85307182-5fab-4720-92e7-8479a363d94c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"85307182-5fab-4720-92e7-8479a363d94c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 10 06:49:26 crc kubenswrapper[4810]: I0110 06:49:26.257526 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 10 06:49:29 crc kubenswrapper[4810]: I0110 06:49:29.921304 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 10 06:49:29 crc kubenswrapper[4810]: I0110 06:49:29.922358 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 10 06:49:29 crc kubenswrapper[4810]: I0110 06:49:29.938796 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 10 06:49:30 crc kubenswrapper[4810]: I0110 06:49:30.108284 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b679c2df-44f6-4b2d-8204-2db8c95a3085-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b679c2df-44f6-4b2d-8204-2db8c95a3085\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 10 06:49:30 crc kubenswrapper[4810]: I0110 06:49:30.108586 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b679c2df-44f6-4b2d-8204-2db8c95a3085-kube-api-access\") pod \"installer-9-crc\" (UID: \"b679c2df-44f6-4b2d-8204-2db8c95a3085\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 10 06:49:30 crc kubenswrapper[4810]: I0110 06:49:30.108712 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b679c2df-44f6-4b2d-8204-2db8c95a3085-var-lock\") pod \"installer-9-crc\" (UID: \"b679c2df-44f6-4b2d-8204-2db8c95a3085\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 10 06:49:30 crc kubenswrapper[4810]: I0110 06:49:30.210682 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/b679c2df-44f6-4b2d-8204-2db8c95a3085-var-lock\") pod \"installer-9-crc\" (UID: \"b679c2df-44f6-4b2d-8204-2db8c95a3085\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 10 06:49:30 crc kubenswrapper[4810]: I0110 06:49:30.210801 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b679c2df-44f6-4b2d-8204-2db8c95a3085-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b679c2df-44f6-4b2d-8204-2db8c95a3085\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 10 06:49:30 crc kubenswrapper[4810]: I0110 06:49:30.210845 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b679c2df-44f6-4b2d-8204-2db8c95a3085-kube-api-access\") pod \"installer-9-crc\" (UID: \"b679c2df-44f6-4b2d-8204-2db8c95a3085\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 10 06:49:30 crc kubenswrapper[4810]: I0110 06:49:30.211163 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b679c2df-44f6-4b2d-8204-2db8c95a3085-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b679c2df-44f6-4b2d-8204-2db8c95a3085\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 10 06:49:30 crc kubenswrapper[4810]: I0110 06:49:30.213178 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b679c2df-44f6-4b2d-8204-2db8c95a3085-var-lock\") pod \"installer-9-crc\" (UID: \"b679c2df-44f6-4b2d-8204-2db8c95a3085\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 10 06:49:30 crc kubenswrapper[4810]: I0110 06:49:30.232995 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b679c2df-44f6-4b2d-8204-2db8c95a3085-kube-api-access\") pod \"installer-9-crc\" (UID: \"b679c2df-44f6-4b2d-8204-2db8c95a3085\") " 
pod="openshift-kube-apiserver/installer-9-crc"
Jan 10 06:49:30 crc kubenswrapper[4810]: I0110 06:49:30.245860 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 10 06:49:32 crc kubenswrapper[4810]: E0110 06:49:32.589449 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 10 06:49:32 crc kubenswrapper[4810]: E0110 06:49:32.589626 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4p24l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-t4lxp_openshift-marketplace(6a011bb8-70ef-4c9b-b5cd-642600d792b4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 10 06:49:32 crc kubenswrapper[4810]: E0110 06:49:32.590793 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-t4lxp" podUID="6a011bb8-70ef-4c9b-b5cd-642600d792b4"
Jan 10 06:49:37 crc kubenswrapper[4810]: E0110 06:49:37.630169 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-t4lxp" podUID="6a011bb8-70ef-4c9b-b5cd-642600d792b4"
Jan 10 06:49:37 crc kubenswrapper[4810]: E0110 06:49:37.718873 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 10 06:49:37 crc kubenswrapper[4810]: E0110 06:49:37.719092 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s629q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-645sk_openshift-marketplace(c0fc4503-794c-403e-a6f7-e18be2410845): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 10 06:49:37 crc kubenswrapper[4810]: E0110 06:49:37.720265 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-645sk" podUID="c0fc4503-794c-403e-a6f7-e18be2410845"
Jan 10 06:49:38 crc kubenswrapper[4810]: E0110 06:49:38.154747 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 10 06:49:38 crc kubenswrapper[4810]: E0110 06:49:38.154897 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kb4sl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-49qfk_openshift-marketplace(0016a5b8-02cc-4e7f-9eb9-145e3f75e669): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 10 06:49:38 crc kubenswrapper[4810]: E0110 06:49:38.156130 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-49qfk" podUID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669"
Jan 10 06:49:49 crc kubenswrapper[4810]: E0110 06:49:49.101951 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 10 06:49:49 crc kubenswrapper[4810]: E0110 06:49:49.102705 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x682b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-chcch_openshift-marketplace(dda74b1c-bf42-4091-af37-e29e1494b2a6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 10 06:49:49 crc kubenswrapper[4810]: E0110 06:49:49.103983 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-chcch" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6"
Jan 10 06:49:49 crc kubenswrapper[4810]: E0110 06:49:49.158593 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 10 06:49:49 crc kubenswrapper[4810]: E0110 06:49:49.158935 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6d69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-k49l4_openshift-marketplace(505c4d65-7e70-43ba-ae57-4660f944e1dc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 10 06:49:49 crc kubenswrapper[4810]: E0110 06:49:49.160078 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-k49l4" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc"
Jan 10 06:49:49 crc kubenswrapper[4810]: E0110 06:49:49.378608 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 10 06:49:49 crc kubenswrapper[4810]: E0110 06:49:49.378788 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8cfhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xb5xt_openshift-marketplace(b4b646bd-b87f-45fc-8142-cef150cda498): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 10 06:49:49 crc kubenswrapper[4810]: E0110 06:49:49.379933 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xb5xt" podUID="b4b646bd-b87f-45fc-8142-cef150cda498"
Jan 10 06:49:50 crc kubenswrapper[4810]: I0110 06:49:50.883390 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 06:49:50 crc kubenswrapper[4810]: I0110 06:49:50.883472 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 06:49:50 crc kubenswrapper[4810]: I0110 06:49:50.883519 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp"
Jan 10 06:49:50 crc kubenswrapper[4810]: I0110 06:49:50.884007 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208"} pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 10 06:49:50 crc kubenswrapper[4810]: I0110 06:49:50.884111 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" containerID="cri-o://936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208" gracePeriod=600
Jan 10 06:49:51 crc kubenswrapper[4810]: I0110 06:49:51.640691 4810 generic.go:334] "Generic (PLEG): container finished" podID="b5b79429-9259-412f-bab8-27865ab7029b" containerID="936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208" exitCode=0
Jan 10 06:49:51 crc kubenswrapper[4810]: I0110 06:49:51.640794 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerDied","Data":"936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208"}
Jan 10 06:49:52 crc kubenswrapper[4810]: E0110 06:49:52.311090 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xb5xt" podUID="b4b646bd-b87f-45fc-8142-cef150cda498"
Jan 10 06:49:52 crc kubenswrapper[4810]: E0110 06:49:52.331817 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 10 06:49:52 crc kubenswrapper[4810]: E0110 06:49:52.331964 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knfbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-qbmz8_openshift-marketplace(3fa3125f-8e13-430b-8227-39bd4c3e011b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 10 06:49:52 crc kubenswrapper[4810]: E0110 06:49:52.333854 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-qbmz8" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b"
Jan 10 06:49:52 crc kubenswrapper[4810]: E0110 06:49:52.345763 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 10 06:49:52 crc kubenswrapper[4810]: E0110 06:49:52.345936 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clvl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-rtfb8_openshift-marketplace(78118904-197b-4ed9-bd6e-c02dc35f4e94): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 10 06:49:52 crc kubenswrapper[4810]: E0110 06:49:52.347652 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-rtfb8" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94"
Jan 10 06:49:52 crc kubenswrapper[4810]: I0110 06:49:52.766422 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 10 06:49:52 crc kubenswrapper[4810]: I0110 06:49:52.771429 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 10 06:49:52 crc kubenswrapper[4810]: W0110 06:49:52.773707 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod85307182_5fab_4720_92e7_8479a363d94c.slice/crio-ce7284efc46e69a1f2ee138c90b50096a9c99add4259e597b200cff86d6b651e WatchSource:0}: Error finding container ce7284efc46e69a1f2ee138c90b50096a9c99add4259e597b200cff86d6b651e: Status 404 returned error can't find the container with id ce7284efc46e69a1f2ee138c90b50096a9c99add4259e597b200cff86d6b651e
Jan 10 06:49:52 crc kubenswrapper[4810]: W0110 06:49:52.802792 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb679c2df_44f6_4b2d_8204_2db8c95a3085.slice/crio-a5fea4a1bdc155c2e63a8e18794da4d47dd7e6d501223c5776898192f16904f1 WatchSource:0}: Error finding container a5fea4a1bdc155c2e63a8e18794da4d47dd7e6d501223c5776898192f16904f1: Status 404 returned error can't find the container with id a5fea4a1bdc155c2e63a8e18794da4d47dd7e6d501223c5776898192f16904f1
Jan 10 06:49:53 crc kubenswrapper[4810]: I0110 06:49:53.663065 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerStarted","Data":"fed5cc11661bddfe784aa76be4300a8d8ea3e2ff81f1fb4536922a245a7ce154"}
Jan 10 06:49:53 crc kubenswrapper[4810]: I0110 06:49:53.664763 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"85307182-5fab-4720-92e7-8479a363d94c","Type":"ContainerStarted","Data":"ce7284efc46e69a1f2ee138c90b50096a9c99add4259e597b200cff86d6b651e"}
Jan 10 06:49:53 crc kubenswrapper[4810]: I0110 06:49:53.665718 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b679c2df-44f6-4b2d-8204-2db8c95a3085","Type":"ContainerStarted","Data":"a5fea4a1bdc155c2e63a8e18794da4d47dd7e6d501223c5776898192f16904f1"}
Jan 10 06:49:54 crc kubenswrapper[4810]: I0110 06:49:54.674157 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b679c2df-44f6-4b2d-8204-2db8c95a3085","Type":"ContainerStarted","Data":"daf9ca967c56726febada5d2be42a3631a8c3036893a3cf0fb54ac7b1f406865"}
Jan 10 06:49:55 crc kubenswrapper[4810]: I0110 06:49:55.681146 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"85307182-5fab-4720-92e7-8479a363d94c","Type":"ContainerStarted","Data":"23abb211fe5cc8377cd8c6a74f55bbfd05006ad50f08fa5bec2e10e655ead468"}
Jan 10 06:49:55 crc kubenswrapper[4810]: I0110 06:49:55.711026 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=30.711003108 podStartE2EDuration="30.711003108s" podCreationTimestamp="2026-01-10 06:49:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:49:55.708344643 +0000 UTC m=+224.323837536" watchObservedRunningTime="2026-01-10 06:49:55.711003108 +0000 UTC m=+224.326495991"
Jan 10 06:49:55 crc kubenswrapper[4810]: I0110 06:49:55.727875 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=26.727852632 podStartE2EDuration="26.727852632s" podCreationTimestamp="2026-01-10 06:49:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:49:55.723849834 +0000 UTC m=+224.339342727" watchObservedRunningTime="2026-01-10 06:49:55.727852632 +0000 UTC m=+224.343345515"
Jan 10 06:49:56 crc kubenswrapper[4810]: I0110 06:49:56.689609 4810 generic.go:334] "Generic (PLEG): container finished" podID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669" containerID="a6ba7329fbe50415c39c1a5fdfb61cf9bd908b43d88a26ca62a97cee41fd3474" exitCode=0
Jan 10 06:49:56 crc kubenswrapper[4810]: I0110 06:49:56.689732 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49qfk" event={"ID":"0016a5b8-02cc-4e7f-9eb9-145e3f75e669","Type":"ContainerDied","Data":"a6ba7329fbe50415c39c1a5fdfb61cf9bd908b43d88a26ca62a97cee41fd3474"}
Jan 10 06:49:56 crc kubenswrapper[4810]: I0110 06:49:56.695502 4810 generic.go:334] "Generic (PLEG): container finished" podID="c0fc4503-794c-403e-a6f7-e18be2410845" containerID="54d3b7f6a3724daa9e96b560aa53ea762a72a10c5b38a61bbd534501edcfedf1" exitCode=0
Jan 10 06:49:56 crc kubenswrapper[4810]: I0110 06:49:56.695562 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-645sk" event={"ID":"c0fc4503-794c-403e-a6f7-e18be2410845","Type":"ContainerDied","Data":"54d3b7f6a3724daa9e96b560aa53ea762a72a10c5b38a61bbd534501edcfedf1"}
Jan 10 06:49:56 crc kubenswrapper[4810]: I0110 06:49:56.699388 4810 generic.go:334] "Generic (PLEG): container finished" podID="85307182-5fab-4720-92e7-8479a363d94c" containerID="23abb211fe5cc8377cd8c6a74f55bbfd05006ad50f08fa5bec2e10e655ead468" exitCode=0
Jan 10 06:49:56 crc kubenswrapper[4810]: I0110 06:49:56.699439 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"85307182-5fab-4720-92e7-8479a363d94c","Type":"ContainerDied","Data":"23abb211fe5cc8377cd8c6a74f55bbfd05006ad50f08fa5bec2e10e655ead468"}
Jan 10 06:49:57 crc kubenswrapper[4810]: I0110 06:49:57.707029 4810 generic.go:334] "Generic (PLEG): container finished" podID="6a011bb8-70ef-4c9b-b5cd-642600d792b4" containerID="ec5f16940b0bfe6cb69769c862934412705d929acc0a330b222782c698f13814" exitCode=0
Jan 10 06:49:57 crc kubenswrapper[4810]: I0110 06:49:57.707728 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4lxp" event={"ID":"6a011bb8-70ef-4c9b-b5cd-642600d792b4","Type":"ContainerDied","Data":"ec5f16940b0bfe6cb69769c862934412705d929acc0a330b222782c698f13814"}
Jan 10 06:49:57 crc kubenswrapper[4810]: I0110 06:49:57.714587 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49qfk" event={"ID":"0016a5b8-02cc-4e7f-9eb9-145e3f75e669","Type":"ContainerStarted","Data":"a9498ecc858af0a3c6825708f33fbe9baf1ff70ae47c306e925e2bd0e6619eb1"}
Jan 10 06:49:57 crc kubenswrapper[4810]: I0110 06:49:57.718632 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-645sk" event={"ID":"c0fc4503-794c-403e-a6f7-e18be2410845","Type":"ContainerStarted","Data":"8bf07403ae63e21c2ea1d8f333ddf5f92d011606fd786051242f9ac963b2d1e9"}
Jan 10 06:49:57 crc kubenswrapper[4810]: I0110 06:49:57.755449 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-49qfk" podStartSLOduration=3.570130841 podStartE2EDuration="1m14.755428117s" podCreationTimestamp="2026-01-10 06:48:43 +0000 UTC" firstStartedPulling="2026-01-10 06:48:46.164358759 +0000 UTC m=+154.779851652" lastFinishedPulling="2026-01-10 06:49:57.349656045 +0000 UTC m=+225.965148928" observedRunningTime="2026-01-10 06:49:57.753252494 +0000 UTC m=+226.368745387" watchObservedRunningTime="2026-01-10 06:49:57.755428117 +0000 UTC m=+226.370921010"
Jan 10 06:49:57 crc kubenswrapper[4810]: I0110 06:49:57.773575 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-645sk" podStartSLOduration=6.90064846 podStartE2EDuration="1m12.773550402s" podCreationTimestamp="2026-01-10 06:48:45 +0000 UTC" firstStartedPulling="2026-01-10 06:48:51.245410311 +0000 UTC m=+159.860903214" lastFinishedPulling="2026-01-10 06:49:57.118312273 +0000 UTC m=+225.733805156" observedRunningTime="2026-01-10 06:49:57.770544748 +0000 UTC m=+226.386037631" watchObservedRunningTime="2026-01-10 06:49:57.773550402 +0000 UTC m=+226.389043285"
Jan 10 06:49:57 crc kubenswrapper[4810]: I0110 06:49:57.997173 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 10 06:49:58 crc kubenswrapper[4810]: I0110 06:49:58.112318 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85307182-5fab-4720-92e7-8479a363d94c-kubelet-dir\") pod \"85307182-5fab-4720-92e7-8479a363d94c\" (UID: \"85307182-5fab-4720-92e7-8479a363d94c\") "
Jan 10 06:49:58 crc kubenswrapper[4810]: I0110 06:49:58.112428 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85307182-5fab-4720-92e7-8479a363d94c-kube-api-access\") pod \"85307182-5fab-4720-92e7-8479a363d94c\" (UID: \"85307182-5fab-4720-92e7-8479a363d94c\") "
Jan 10 06:49:58 crc kubenswrapper[4810]: I0110 06:49:58.112424 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85307182-5fab-4720-92e7-8479a363d94c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "85307182-5fab-4720-92e7-8479a363d94c" (UID: "85307182-5fab-4720-92e7-8479a363d94c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 10 06:49:58 crc kubenswrapper[4810]: I0110 06:49:58.112872 4810 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85307182-5fab-4720-92e7-8479a363d94c-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 10 06:49:58 crc kubenswrapper[4810]: I0110 06:49:58.120279 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85307182-5fab-4720-92e7-8479a363d94c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "85307182-5fab-4720-92e7-8479a363d94c" (UID: "85307182-5fab-4720-92e7-8479a363d94c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:49:58 crc kubenswrapper[4810]: I0110 06:49:58.213341 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85307182-5fab-4720-92e7-8479a363d94c-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 10 06:49:58 crc kubenswrapper[4810]: I0110 06:49:58.727801 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 10 06:49:58 crc kubenswrapper[4810]: I0110 06:49:58.732757 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"85307182-5fab-4720-92e7-8479a363d94c","Type":"ContainerDied","Data":"ce7284efc46e69a1f2ee138c90b50096a9c99add4259e597b200cff86d6b651e"}
Jan 10 06:49:58 crc kubenswrapper[4810]: I0110 06:49:58.732863 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce7284efc46e69a1f2ee138c90b50096a9c99add4259e597b200cff86d6b651e"
Jan 10 06:50:03 crc kubenswrapper[4810]: I0110 06:50:03.886055 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-49qfk"
Jan 10 06:50:03 crc kubenswrapper[4810]: I0110 06:50:03.886494 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-49qfk"
Jan 10 06:50:03 crc kubenswrapper[4810]: I0110 06:50:03.979075 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-49qfk"
Jan 10 06:50:04 crc kubenswrapper[4810]: I0110 06:50:04.847850 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-49qfk"
Jan 10 06:50:06 crc kubenswrapper[4810]: I0110 06:50:06.069023 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-645sk"
Jan 10 06:50:06 crc kubenswrapper[4810]: I0110 06:50:06.069433 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-645sk"
Jan 10 06:50:06 crc kubenswrapper[4810]: I0110 06:50:06.140411 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-645sk"
Jan 10 06:50:06 crc kubenswrapper[4810]: I0110 06:50:06.898294 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-645sk"
Jan 10 06:50:07 crc kubenswrapper[4810]: I0110 06:50:07.825114 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4lxp" event={"ID":"6a011bb8-70ef-4c9b-b5cd-642600d792b4","Type":"ContainerStarted","Data":"3192feb78240b738d50db713e10ed0f15cdae09e0a7b1c0662152edbe779f033"}
Jan 10 06:50:08 crc kubenswrapper[4810]: I0110 06:50:08.932034 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-49qfk"]
Jan 10 06:50:08 crc kubenswrapper[4810]: I0110 06:50:08.932393 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-49qfk" podUID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669" containerName="registry-server" containerID="cri-o://a9498ecc858af0a3c6825708f33fbe9baf1ff70ae47c306e925e2bd0e6619eb1" gracePeriod=2
Jan 10 06:50:09 crc kubenswrapper[4810]: I0110 06:50:09.138103 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-645sk"]
Jan 10 06:50:09 crc kubenswrapper[4810]: I0110 06:50:09.138614 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-645sk" podUID="c0fc4503-794c-403e-a6f7-e18be2410845" containerName="registry-server" containerID="cri-o://8bf07403ae63e21c2ea1d8f333ddf5f92d011606fd786051242f9ac963b2d1e9" gracePeriod=2
Jan 10 06:50:13 crc kubenswrapper[4810]: E0110 06:50:13.886850 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9498ecc858af0a3c6825708f33fbe9baf1ff70ae47c306e925e2bd0e6619eb1 is running failed: container process not found" containerID="a9498ecc858af0a3c6825708f33fbe9baf1ff70ae47c306e925e2bd0e6619eb1" cmd=["grpc_health_probe","-addr=:50051"]
Jan 10 06:50:13 crc kubenswrapper[4810]: E0110 06:50:13.887919 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9498ecc858af0a3c6825708f33fbe9baf1ff70ae47c306e925e2bd0e6619eb1 is running failed: container process not found" containerID="a9498ecc858af0a3c6825708f33fbe9baf1ff70ae47c306e925e2bd0e6619eb1" cmd=["grpc_health_probe","-addr=:50051"]
Jan 10 06:50:13 crc kubenswrapper[4810]: E0110 06:50:13.888427 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9498ecc858af0a3c6825708f33fbe9baf1ff70ae47c306e925e2bd0e6619eb1 is running failed: container process not found" containerID="a9498ecc858af0a3c6825708f33fbe9baf1ff70ae47c306e925e2bd0e6619eb1" cmd=["grpc_health_probe","-addr=:50051"]
Jan 10 06:50:13 crc kubenswrapper[4810]: E0110 06:50:13.888492 4810 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a9498ecc858af0a3c6825708f33fbe9baf1ff70ae47c306e925e2bd0e6619eb1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-49qfk" podUID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669" containerName="registry-server"
Jan 10 06:50:16 crc kubenswrapper[4810]: E0110 06:50:16.070188 4810 log.go:32]
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8bf07403ae63e21c2ea1d8f333ddf5f92d011606fd786051242f9ac963b2d1e9 is running failed: container process not found" containerID="8bf07403ae63e21c2ea1d8f333ddf5f92d011606fd786051242f9ac963b2d1e9" cmd=["grpc_health_probe","-addr=:50051"] Jan 10 06:50:16 crc kubenswrapper[4810]: E0110 06:50:16.071094 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8bf07403ae63e21c2ea1d8f333ddf5f92d011606fd786051242f9ac963b2d1e9 is running failed: container process not found" containerID="8bf07403ae63e21c2ea1d8f333ddf5f92d011606fd786051242f9ac963b2d1e9" cmd=["grpc_health_probe","-addr=:50051"] Jan 10 06:50:16 crc kubenswrapper[4810]: E0110 06:50:16.071713 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8bf07403ae63e21c2ea1d8f333ddf5f92d011606fd786051242f9ac963b2d1e9 is running failed: container process not found" containerID="8bf07403ae63e21c2ea1d8f333ddf5f92d011606fd786051242f9ac963b2d1e9" cmd=["grpc_health_probe","-addr=:50051"] Jan 10 06:50:16 crc kubenswrapper[4810]: E0110 06:50:16.071856 4810 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8bf07403ae63e21c2ea1d8f333ddf5f92d011606fd786051242f9ac963b2d1e9 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-645sk" podUID="c0fc4503-794c-403e-a6f7-e18be2410845" containerName="registry-server" Jan 10 06:50:16 crc kubenswrapper[4810]: I0110 06:50:16.722099 4810 generic.go:334] "Generic (PLEG): container finished" podID="c0fc4503-794c-403e-a6f7-e18be2410845" containerID="8bf07403ae63e21c2ea1d8f333ddf5f92d011606fd786051242f9ac963b2d1e9" exitCode=0 Jan 10 
06:50:16 crc kubenswrapper[4810]: I0110 06:50:16.722279 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-645sk" event={"ID":"c0fc4503-794c-403e-a6f7-e18be2410845","Type":"ContainerDied","Data":"8bf07403ae63e21c2ea1d8f333ddf5f92d011606fd786051242f9ac963b2d1e9"} Jan 10 06:50:16 crc kubenswrapper[4810]: I0110 06:50:16.725388 4810 generic.go:334] "Generic (PLEG): container finished" podID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669" containerID="a9498ecc858af0a3c6825708f33fbe9baf1ff70ae47c306e925e2bd0e6619eb1" exitCode=0 Jan 10 06:50:16 crc kubenswrapper[4810]: I0110 06:50:16.725454 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49qfk" event={"ID":"0016a5b8-02cc-4e7f-9eb9-145e3f75e669","Type":"ContainerDied","Data":"a9498ecc858af0a3c6825708f33fbe9baf1ff70ae47c306e925e2bd0e6619eb1"} Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.038756 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49qfk" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.042108 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-645sk" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.119700 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fc4503-794c-403e-a6f7-e18be2410845-catalog-content\") pod \"c0fc4503-794c-403e-a6f7-e18be2410845\" (UID: \"c0fc4503-794c-403e-a6f7-e18be2410845\") " Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.119801 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-utilities\") pod \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\" (UID: \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\") " Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.119883 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fc4503-794c-403e-a6f7-e18be2410845-utilities\") pod \"c0fc4503-794c-403e-a6f7-e18be2410845\" (UID: \"c0fc4503-794c-403e-a6f7-e18be2410845\") " Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.119949 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s629q\" (UniqueName: \"kubernetes.io/projected/c0fc4503-794c-403e-a6f7-e18be2410845-kube-api-access-s629q\") pod \"c0fc4503-794c-403e-a6f7-e18be2410845\" (UID: \"c0fc4503-794c-403e-a6f7-e18be2410845\") " Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.119985 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb4sl\" (UniqueName: \"kubernetes.io/projected/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-kube-api-access-kb4sl\") pod \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\" (UID: \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\") " Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.120139 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-catalog-content\") pod \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\" (UID: \"0016a5b8-02cc-4e7f-9eb9-145e3f75e669\") " Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.121792 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-utilities" (OuterVolumeSpecName: "utilities") pod "0016a5b8-02cc-4e7f-9eb9-145e3f75e669" (UID: "0016a5b8-02cc-4e7f-9eb9-145e3f75e669"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.124399 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0fc4503-794c-403e-a6f7-e18be2410845-utilities" (OuterVolumeSpecName: "utilities") pod "c0fc4503-794c-403e-a6f7-e18be2410845" (UID: "c0fc4503-794c-403e-a6f7-e18be2410845"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.128126 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fc4503-794c-403e-a6f7-e18be2410845-kube-api-access-s629q" (OuterVolumeSpecName: "kube-api-access-s629q") pod "c0fc4503-794c-403e-a6f7-e18be2410845" (UID: "c0fc4503-794c-403e-a6f7-e18be2410845"). InnerVolumeSpecName "kube-api-access-s629q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.128381 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-kube-api-access-kb4sl" (OuterVolumeSpecName: "kube-api-access-kb4sl") pod "0016a5b8-02cc-4e7f-9eb9-145e3f75e669" (UID: "0016a5b8-02cc-4e7f-9eb9-145e3f75e669"). InnerVolumeSpecName "kube-api-access-kb4sl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.156272 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0fc4503-794c-403e-a6f7-e18be2410845-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0fc4503-794c-403e-a6f7-e18be2410845" (UID: "c0fc4503-794c-403e-a6f7-e18be2410845"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.182000 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0016a5b8-02cc-4e7f-9eb9-145e3f75e669" (UID: "0016a5b8-02cc-4e7f-9eb9-145e3f75e669"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.221547 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.221578 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0fc4503-794c-403e-a6f7-e18be2410845-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.221591 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s629q\" (UniqueName: \"kubernetes.io/projected/c0fc4503-794c-403e-a6f7-e18be2410845-kube-api-access-s629q\") on node \"crc\" DevicePath \"\"" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.221608 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb4sl\" (UniqueName: \"kubernetes.io/projected/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-kube-api-access-kb4sl\") on 
node \"crc\" DevicePath \"\"" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.221620 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0016a5b8-02cc-4e7f-9eb9-145e3f75e669-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.221632 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0fc4503-794c-403e-a6f7-e18be2410845-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.736068 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-645sk" event={"ID":"c0fc4503-794c-403e-a6f7-e18be2410845","Type":"ContainerDied","Data":"1e2aaf5408d0c8035a19422c5f6f02cedbdcc1dcf122a02f950b05e8eb15d37f"} Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.736153 4810 scope.go:117] "RemoveContainer" containerID="8bf07403ae63e21c2ea1d8f333ddf5f92d011606fd786051242f9ac963b2d1e9" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.736089 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-645sk" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.740517 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-49qfk" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.741300 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49qfk" event={"ID":"0016a5b8-02cc-4e7f-9eb9-145e3f75e669","Type":"ContainerDied","Data":"f89e9a9828f38d65a301d728c94e530c36c058f736a352428428e047eec3219e"} Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.795772 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t4lxp" podStartSLOduration=17.272690247 podStartE2EDuration="1m32.795746817s" podCreationTimestamp="2026-01-10 06:48:45 +0000 UTC" firstStartedPulling="2026-01-10 06:48:51.246596492 +0000 UTC m=+159.862089375" lastFinishedPulling="2026-01-10 06:50:06.769653022 +0000 UTC m=+235.385145945" observedRunningTime="2026-01-10 06:50:17.77184648 +0000 UTC m=+246.387339373" watchObservedRunningTime="2026-01-10 06:50:17.795746817 +0000 UTC m=+246.411239710" Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.799044 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-49qfk"] Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.811747 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-49qfk"] Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.817711 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-645sk"] Jan 10 06:50:17 crc kubenswrapper[4810]: I0110 06:50:17.824753 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-645sk"] Jan 10 06:50:19 crc kubenswrapper[4810]: I0110 06:50:19.705243 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669" path="/var/lib/kubelet/pods/0016a5b8-02cc-4e7f-9eb9-145e3f75e669/volumes" Jan 10 06:50:19 crc 
kubenswrapper[4810]: I0110 06:50:19.707096 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0fc4503-794c-403e-a6f7-e18be2410845" path="/var/lib/kubelet/pods/c0fc4503-794c-403e-a6f7-e18be2410845/volumes" Jan 10 06:50:24 crc kubenswrapper[4810]: I0110 06:50:24.427333 4810 scope.go:117] "RemoveContainer" containerID="54d3b7f6a3724daa9e96b560aa53ea762a72a10c5b38a61bbd534501edcfedf1" Jan 10 06:50:24 crc kubenswrapper[4810]: I0110 06:50:24.492401 4810 scope.go:117] "RemoveContainer" containerID="eb3ca0879c2eb80cee833a7bfe3f65d91088ff662fd300b4334e2fbdb70dcf9d" Jan 10 06:50:24 crc kubenswrapper[4810]: I0110 06:50:24.541240 4810 scope.go:117] "RemoveContainer" containerID="a9498ecc858af0a3c6825708f33fbe9baf1ff70ae47c306e925e2bd0e6619eb1" Jan 10 06:50:24 crc kubenswrapper[4810]: I0110 06:50:24.565625 4810 scope.go:117] "RemoveContainer" containerID="a6ba7329fbe50415c39c1a5fdfb61cf9bd908b43d88a26ca62a97cee41fd3474" Jan 10 06:50:24 crc kubenswrapper[4810]: I0110 06:50:24.613734 4810 scope.go:117] "RemoveContainer" containerID="a3f61eaa09546c7fc1fdaf92ad9da340cd595bf06a51b14aa660021ef172a23b" Jan 10 06:50:25 crc kubenswrapper[4810]: I0110 06:50:25.634257 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:50:25 crc kubenswrapper[4810]: I0110 06:50:25.634659 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:50:25 crc kubenswrapper[4810]: I0110 06:50:25.770345 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:50:25 crc kubenswrapper[4810]: I0110 06:50:25.800087 4810 generic.go:334] "Generic (PLEG): container finished" podID="b4b646bd-b87f-45fc-8142-cef150cda498" containerID="d6a705968d516fa0198743fa7ba94ee7df3f7c67cf21c7e43f17d62f48c3d8cf" exitCode=0 Jan 10 06:50:25 crc kubenswrapper[4810]: I0110 
06:50:25.800144 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xb5xt" event={"ID":"b4b646bd-b87f-45fc-8142-cef150cda498","Type":"ContainerDied","Data":"d6a705968d516fa0198743fa7ba94ee7df3f7c67cf21c7e43f17d62f48c3d8cf"} Jan 10 06:50:25 crc kubenswrapper[4810]: I0110 06:50:25.804773 4810 generic.go:334] "Generic (PLEG): container finished" podID="dda74b1c-bf42-4091-af37-e29e1494b2a6" containerID="15f7cdb396000da6aa75c70c7f168578c16aca6030f9d26712191ffea9eeeca7" exitCode=0 Jan 10 06:50:25 crc kubenswrapper[4810]: I0110 06:50:25.804821 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chcch" event={"ID":"dda74b1c-bf42-4091-af37-e29e1494b2a6","Type":"ContainerDied","Data":"15f7cdb396000da6aa75c70c7f168578c16aca6030f9d26712191ffea9eeeca7"} Jan 10 06:50:25 crc kubenswrapper[4810]: I0110 06:50:25.806875 4810 generic.go:334] "Generic (PLEG): container finished" podID="78118904-197b-4ed9-bd6e-c02dc35f4e94" containerID="1019e916358fbd31fbba59c942016069b452b1639acb70474c5cc9fb3be4a248" exitCode=0 Jan 10 06:50:25 crc kubenswrapper[4810]: I0110 06:50:25.806916 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfb8" event={"ID":"78118904-197b-4ed9-bd6e-c02dc35f4e94","Type":"ContainerDied","Data":"1019e916358fbd31fbba59c942016069b452b1639acb70474c5cc9fb3be4a248"} Jan 10 06:50:25 crc kubenswrapper[4810]: I0110 06:50:25.812388 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbmz8" event={"ID":"3fa3125f-8e13-430b-8227-39bd4c3e011b","Type":"ContainerStarted","Data":"ac9e9dd26e6872f62024886023b3f41c5325f3e43738adb0e477ad041f388585"} Jan 10 06:50:25 crc kubenswrapper[4810]: I0110 06:50:25.817108 4810 generic.go:334] "Generic (PLEG): container finished" podID="505c4d65-7e70-43ba-ae57-4660f944e1dc" containerID="2ecb3e5e4ce0bd3552afe61cb74c8457495be877aff47f5b2719b1d2864509cc" 
exitCode=0 Jan 10 06:50:25 crc kubenswrapper[4810]: I0110 06:50:25.820322 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k49l4" event={"ID":"505c4d65-7e70-43ba-ae57-4660f944e1dc","Type":"ContainerDied","Data":"2ecb3e5e4ce0bd3552afe61cb74c8457495be877aff47f5b2719b1d2864509cc"} Jan 10 06:50:25 crc kubenswrapper[4810]: I0110 06:50:25.868583 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:50:26 crc kubenswrapper[4810]: I0110 06:50:26.824204 4810 generic.go:334] "Generic (PLEG): container finished" podID="3fa3125f-8e13-430b-8227-39bd4c3e011b" containerID="ac9e9dd26e6872f62024886023b3f41c5325f3e43738adb0e477ad041f388585" exitCode=0 Jan 10 06:50:26 crc kubenswrapper[4810]: I0110 06:50:26.824267 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbmz8" event={"ID":"3fa3125f-8e13-430b-8227-39bd4c3e011b","Type":"ContainerDied","Data":"ac9e9dd26e6872f62024886023b3f41c5325f3e43738adb0e477ad041f388585"} Jan 10 06:50:27 crc kubenswrapper[4810]: I0110 06:50:27.836879 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k49l4" event={"ID":"505c4d65-7e70-43ba-ae57-4660f944e1dc","Type":"ContainerStarted","Data":"da02d96ce574b863d2e6c3cddc4c481431e7f9d4e365b9dea4697f9479cfdd0c"} Jan 10 06:50:27 crc kubenswrapper[4810]: I0110 06:50:27.840750 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xb5xt" event={"ID":"b4b646bd-b87f-45fc-8142-cef150cda498","Type":"ContainerStarted","Data":"dffa5a0c4ad606afa0898f3d43b56f8e0fa4195bba92b3071e6a181659b3ff99"} Jan 10 06:50:27 crc kubenswrapper[4810]: I0110 06:50:27.877438 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k49l4" podStartSLOduration=3.747993056 
podStartE2EDuration="1m44.877420723s" podCreationTimestamp="2026-01-10 06:48:43 +0000 UTC" firstStartedPulling="2026-01-10 06:48:46.172597487 +0000 UTC m=+154.788090380" lastFinishedPulling="2026-01-10 06:50:27.302025134 +0000 UTC m=+255.917518047" observedRunningTime="2026-01-10 06:50:27.873767502 +0000 UTC m=+256.489260405" watchObservedRunningTime="2026-01-10 06:50:27.877420723 +0000 UTC m=+256.492913606" Jan 10 06:50:28 crc kubenswrapper[4810]: I0110 06:50:28.855966 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbmz8" event={"ID":"3fa3125f-8e13-430b-8227-39bd4c3e011b","Type":"ContainerStarted","Data":"0bbae806b2ce0854c07030f593fab50ebcf3da6603b16642f4fc2d850bd2d8e0"} Jan 10 06:50:28 crc kubenswrapper[4810]: I0110 06:50:28.861819 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chcch" event={"ID":"dda74b1c-bf42-4091-af37-e29e1494b2a6","Type":"ContainerStarted","Data":"230facb7aae917b908b94b2599451d6397c59ef32c0c24e7df971a49f8504820"} Jan 10 06:50:28 crc kubenswrapper[4810]: I0110 06:50:28.870826 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xb5xt" podStartSLOduration=4.8582814469999995 podStartE2EDuration="1m45.870811124s" podCreationTimestamp="2026-01-10 06:48:43 +0000 UTC" firstStartedPulling="2026-01-10 06:48:46.166726982 +0000 UTC m=+154.782219875" lastFinishedPulling="2026-01-10 06:50:27.179256649 +0000 UTC m=+255.794749552" observedRunningTime="2026-01-10 06:50:27.910067924 +0000 UTC m=+256.525560807" watchObservedRunningTime="2026-01-10 06:50:28.870811124 +0000 UTC m=+257.486304007" Jan 10 06:50:28 crc kubenswrapper[4810]: I0110 06:50:28.873609 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qbmz8" podStartSLOduration=10.751478757 podStartE2EDuration="1m42.873602642s" podCreationTimestamp="2026-01-10 06:48:46 
+0000 UTC" firstStartedPulling="2026-01-10 06:48:55.2927399 +0000 UTC m=+163.908232783" lastFinishedPulling="2026-01-10 06:50:27.414863775 +0000 UTC m=+256.030356668" observedRunningTime="2026-01-10 06:50:28.870380853 +0000 UTC m=+257.485873746" watchObservedRunningTime="2026-01-10 06:50:28.873602642 +0000 UTC m=+257.489095525" Jan 10 06:50:28 crc kubenswrapper[4810]: I0110 06:50:28.897050 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-chcch" podStartSLOduration=4.605507701 podStartE2EDuration="1m45.897028708s" podCreationTimestamp="2026-01-10 06:48:43 +0000 UTC" firstStartedPulling="2026-01-10 06:48:46.172619038 +0000 UTC m=+154.788111971" lastFinishedPulling="2026-01-10 06:50:27.464140085 +0000 UTC m=+256.079632978" observedRunningTime="2026-01-10 06:50:28.894557617 +0000 UTC m=+257.510050510" watchObservedRunningTime="2026-01-10 06:50:28.897028708 +0000 UTC m=+257.512521591" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.009410 4810 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.009903 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669" containerName="extract-content" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.009915 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669" containerName="extract-content" Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.009930 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85307182-5fab-4720-92e7-8479a363d94c" containerName="pruner" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.009936 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="85307182-5fab-4720-92e7-8479a363d94c" containerName="pruner" Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.009946 4810 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fc4503-794c-403e-a6f7-e18be2410845" containerName="extract-content" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.009952 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fc4503-794c-403e-a6f7-e18be2410845" containerName="extract-content" Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.009962 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fc4503-794c-403e-a6f7-e18be2410845" containerName="extract-utilities" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.009969 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fc4503-794c-403e-a6f7-e18be2410845" containerName="extract-utilities" Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.009979 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fc4503-794c-403e-a6f7-e18be2410845" containerName="registry-server" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.009985 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fc4503-794c-403e-a6f7-e18be2410845" containerName="registry-server" Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.009992 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669" containerName="extract-utilities" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.009998 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669" containerName="extract-utilities" Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.010008 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669" containerName="registry-server" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.010015 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669" containerName="registry-server" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.010115 4810 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fc4503-794c-403e-a6f7-e18be2410845" containerName="registry-server" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.010125 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="85307182-5fab-4720-92e7-8479a363d94c" containerName="pruner" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.010130 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="0016a5b8-02cc-4e7f-9eb9-145e3f75e669" containerName="registry-server" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.010443 4810 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.010601 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.010687 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93" gracePeriod=15 Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011041 4810 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011180 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa" gracePeriod=15 Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.011210 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011223 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.011234 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011242 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.011255 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011263 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.011275 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011282 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.011293 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011294 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" 
containerID="cri-o://81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190" gracePeriod=15 Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011262 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e" gracePeriod=15 Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011301 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011277 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d" gracePeriod=15 Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.011354 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011363 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011475 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011490 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011500 4810 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011511 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.011522 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.016173 4810 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.055949 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.056797 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.056872 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.056926 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.158118 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.158168 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.158214 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.158257 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.158272 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.158291 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.158313 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.158336 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.158334 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.158690 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.158720 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.259883 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.259929 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.259949 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.260000 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.260026 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.260023 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.260039 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.260087 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.260092 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.260067 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.353652 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 10 06:50:32 crc kubenswrapper[4810]: W0110 06:50:32.372863 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-a9020c33e7898257ca3477feddddf35fe5431bb7be2bd58dd11f217f3bdcde1f WatchSource:0}: Error finding container a9020c33e7898257ca3477feddddf35fe5431bb7be2bd58dd11f217f3bdcde1f: Status 404 returned error can't find the container with id a9020c33e7898257ca3477feddddf35fe5431bb7be2bd58dd11f217f3bdcde1f Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.375587 4810 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18894bf235b5dd1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-10 06:50:32.374910234 +0000 UTC m=+260.990403127,LastTimestamp:2026-01-10 06:50:32.374910234 +0000 UTC m=+260.990403127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 10 06:50:32 crc kubenswrapper[4810]: E0110 06:50:32.411875 4810 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18894bf235b5dd1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-10 06:50:32.374910234 +0000 UTC m=+260.990403127,LastTimestamp:2026-01-10 06:50:32.374910234 +0000 UTC m=+260.990403127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.892171 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a9020c33e7898257ca3477feddddf35fe5431bb7be2bd58dd11f217f3bdcde1f"} Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.895985 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfb8" event={"ID":"78118904-197b-4ed9-bd6e-c02dc35f4e94","Type":"ContainerStarted","Data":"eb3600350a8e93274d7d83358a8be8c4a036fe4f08d008a67f5915c6ad4da6d9"} Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.898996 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 10 06:50:32 crc kubenswrapper[4810]: I0110 06:50:32.899729 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190" exitCode=2 Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.484778 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.485654 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.525563 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.526166 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 
06:50:33.526572 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.647866 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.647937 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.688520 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.689048 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.689349 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.689629 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.906237 4810 generic.go:334] "Generic (PLEG): container finished" podID="b679c2df-44f6-4b2d-8204-2db8c95a3085" containerID="daf9ca967c56726febada5d2be42a3631a8c3036893a3cf0fb54ac7b1f406865" exitCode=0 Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.906363 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b679c2df-44f6-4b2d-8204-2db8c95a3085","Type":"ContainerDied","Data":"daf9ca967c56726febada5d2be42a3631a8c3036893a3cf0fb54ac7b1f406865"} Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.907070 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.907441 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.908025 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 
06:50:33.909112 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.909884 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.910360 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa" exitCode=0 Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.910380 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e" exitCode=0 Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.910389 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d" exitCode=0 Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.911801 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7669c58d721e7621e3e295ee5f8e290a555dc442a98d5f4f5a3178afc117739f"} Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.913141 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.913357 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.913533 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.913722 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.913915 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.914113 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.914289 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.914441 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.914616 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.914786 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.948377 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 
06:50:33.948781 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.948939 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.949183 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.949537 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.949737 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.964000 
4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.964396 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.964629 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.965046 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.965649 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:33 crc kubenswrapper[4810]: I0110 06:50:33.966014 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.101266 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.101350 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.171174 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.171672 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.171984 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.172218 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.172439 4810 
status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.172667 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.172895 4810 status_manager.go:851] "Failed to get status for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: E0110 06:50:34.263752 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-conmon-0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93.scope\": RecentStats: unable to find data in memory cache]" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.380970 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.382008 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.382641 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.383256 4810 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.383824 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.384118 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.384540 4810 status_manager.go:851] "Failed to get status 
for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.384982 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.385395 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.486049 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.486119 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.486142 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.486376 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.486429 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.486437 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.587706 4810 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.587764 4810 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.587782 4810 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.919521 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.920329 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93" exitCode=0 Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.920390 4810 scope.go:117] "RemoveContainer" containerID="cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.920413 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.935473 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.935762 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.935905 4810 status_manager.go:851] "Failed to get status for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.936038 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.936179 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.936338 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.936468 4810 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.937241 4810 scope.go:117] "RemoveContainer" containerID="437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.948977 4810 scope.go:117] "RemoveContainer" containerID="c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.982427 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.982865 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.983243 4810 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.983637 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.983880 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.984167 4810 status_manager.go:851] "Failed to get status for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.984557 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.984894 4810 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.985235 4810 scope.go:117] "RemoveContainer" containerID="81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190" Jan 10 06:50:34 crc kubenswrapper[4810]: I0110 06:50:34.999470 4810 scope.go:117] "RemoveContainer" containerID="0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.020161 4810 scope.go:117] "RemoveContainer" containerID="d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.047249 4810 scope.go:117] "RemoveContainer" containerID="cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa" Jan 10 06:50:35 crc kubenswrapper[4810]: E0110 06:50:35.048363 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\": container with ID starting with cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa not found: ID does not exist" containerID="cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.048404 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa"} err="failed to get container status \"cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\": rpc error: code = NotFound desc = could not find container \"cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa\": container with ID starting with 
cb8f452f6b4f5f17284fb9937fda3071a7cd2a70714914c632c428c0c545e5fa not found: ID does not exist" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.048431 4810 scope.go:117] "RemoveContainer" containerID="437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e" Jan 10 06:50:35 crc kubenswrapper[4810]: E0110 06:50:35.048703 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\": container with ID starting with 437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e not found: ID does not exist" containerID="437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.048739 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e"} err="failed to get container status \"437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\": rpc error: code = NotFound desc = could not find container \"437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e\": container with ID starting with 437f5ffa7b9eacc14b399fe84db111f4e68a09fc3dd523f22998efb41cbb7e3e not found: ID does not exist" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.048767 4810 scope.go:117] "RemoveContainer" containerID="c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d" Jan 10 06:50:35 crc kubenswrapper[4810]: E0110 06:50:35.049060 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\": container with ID starting with c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d not found: ID does not exist" containerID="c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d" Jan 10 06:50:35 crc 
kubenswrapper[4810]: I0110 06:50:35.049084 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d"} err="failed to get container status \"c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\": rpc error: code = NotFound desc = could not find container \"c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d\": container with ID starting with c9ea24abd3fa5c670a4399001454a2bd8ae05799d703ce9461b259beed87410d not found: ID does not exist" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.049099 4810 scope.go:117] "RemoveContainer" containerID="81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190" Jan 10 06:50:35 crc kubenswrapper[4810]: E0110 06:50:35.049402 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\": container with ID starting with 81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190 not found: ID does not exist" containerID="81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.049453 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190"} err="failed to get container status \"81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\": rpc error: code = NotFound desc = could not find container \"81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190\": container with ID starting with 81b20f23618acd73bdf127ffb1614ba9735d2d0c6113a9aa7fca7a5130ee2190 not found: ID does not exist" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.049497 4810 scope.go:117] "RemoveContainer" containerID="0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93" Jan 10 
06:50:35 crc kubenswrapper[4810]: E0110 06:50:35.049786 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\": container with ID starting with 0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93 not found: ID does not exist" containerID="0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.049810 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93"} err="failed to get container status \"0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\": rpc error: code = NotFound desc = could not find container \"0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93\": container with ID starting with 0456fefff88ddefcda3d54ba98c26a8b9db9b89c632d714952c913b09c8def93 not found: ID does not exist" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.049824 4810 scope.go:117] "RemoveContainer" containerID="d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341" Jan 10 06:50:35 crc kubenswrapper[4810]: E0110 06:50:35.049986 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\": container with ID starting with d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341 not found: ID does not exist" containerID="d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.050002 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341"} err="failed to get container status 
\"d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\": rpc error: code = NotFound desc = could not find container \"d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341\": container with ID starting with d44ecdf51b2d3d309e3b38e4c09ff56e46a50b2928fcbe68751def06d871d341 not found: ID does not exist" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.146497 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.146879 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.147071 4810 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.147233 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.147393 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.147530 4810 status_manager.go:851] "Failed to get status for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.147663 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.147800 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.295616 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b679c2df-44f6-4b2d-8204-2db8c95a3085-kube-api-access\") pod \"b679c2df-44f6-4b2d-8204-2db8c95a3085\" (UID: \"b679c2df-44f6-4b2d-8204-2db8c95a3085\") " Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.295701 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b679c2df-44f6-4b2d-8204-2db8c95a3085-kubelet-dir\") pod 
\"b679c2df-44f6-4b2d-8204-2db8c95a3085\" (UID: \"b679c2df-44f6-4b2d-8204-2db8c95a3085\") " Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.295720 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b679c2df-44f6-4b2d-8204-2db8c95a3085-var-lock\") pod \"b679c2df-44f6-4b2d-8204-2db8c95a3085\" (UID: \"b679c2df-44f6-4b2d-8204-2db8c95a3085\") " Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.295807 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b679c2df-44f6-4b2d-8204-2db8c95a3085-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b679c2df-44f6-4b2d-8204-2db8c95a3085" (UID: "b679c2df-44f6-4b2d-8204-2db8c95a3085"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.295840 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b679c2df-44f6-4b2d-8204-2db8c95a3085-var-lock" (OuterVolumeSpecName: "var-lock") pod "b679c2df-44f6-4b2d-8204-2db8c95a3085" (UID: "b679c2df-44f6-4b2d-8204-2db8c95a3085"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.296001 4810 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b679c2df-44f6-4b2d-8204-2db8c95a3085-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.296014 4810 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b679c2df-44f6-4b2d-8204-2db8c95a3085-var-lock\") on node \"crc\" DevicePath \"\"" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.300772 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b679c2df-44f6-4b2d-8204-2db8c95a3085-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b679c2df-44f6-4b2d-8204-2db8c95a3085" (UID: "b679c2df-44f6-4b2d-8204-2db8c95a3085"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.397126 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b679c2df-44f6-4b2d-8204-2db8c95a3085-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 10 06:50:35 crc kubenswrapper[4810]: E0110 06:50:35.501502 4810 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: E0110 06:50:35.501801 4810 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: E0110 06:50:35.502062 4810 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: E0110 06:50:35.502263 4810 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: E0110 06:50:35.502413 4810 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.502428 4810 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 10 06:50:35 crc kubenswrapper[4810]: E0110 06:50:35.502556 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="200ms" Jan 10 06:50:35 crc kubenswrapper[4810]: E0110 06:50:35.703042 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="400ms" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.706066 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.928798 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.928794 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b679c2df-44f6-4b2d-8204-2db8c95a3085","Type":"ContainerDied","Data":"a5fea4a1bdc155c2e63a8e18794da4d47dd7e6d501223c5776898192f16904f1"} Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.928917 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5fea4a1bdc155c2e63a8e18794da4d47dd7e6d501223c5776898192f16904f1" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.932509 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.932830 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.933119 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.933395 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" 
pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.933607 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:35 crc kubenswrapper[4810]: I0110 06:50:35.933822 4810 status_manager.go:851] "Failed to get status for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: E0110 06:50:36.104422 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="800ms" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.675738 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.675810 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.737874 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.739267 4810 
status_manager.go:851] "Failed to get status for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.739732 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.740282 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.740948 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.741396 4810 status_manager.go:851] "Failed to get status for pod" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" pod="openshift-marketplace/redhat-operators-qbmz8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qbmz8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.741900 4810 status_manager.go:851] 
"Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.742402 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: E0110 06:50:36.779821 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:50:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:50:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:50:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-10T06:50:36Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"
names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:18ced00829d36de1505bd694e0c4b3b8fde9f751e5127775d41f218bbaafe4fb\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:72a5d184bde7c213ef5f96ac627c51263b7649e8b963fde2e6be70fe87da6687\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1204122377},{\\\"names\\\":[],\\\"sizeBytes\\\":1201976068},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:6b3b97e17390b5ee568393f2501a5fc412865074b8f6c5355ea48ab7c3983b7a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:8bb7ea6c489e90cb357c7f50fe8266a6a6c6e23e4931a5eaa0fd33a409db20e8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1175127379},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a
8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17a
b95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"na
mes\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: E0110 06:50:36.780453 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: E0110 06:50:36.780715 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="error 
getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: E0110 06:50:36.780972 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: E0110 06:50:36.781250 4810 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: E0110 06:50:36.781270 4810 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 10 06:50:36 crc kubenswrapper[4810]: E0110 06:50:36.905702 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="1.6s" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.973561 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.973907 4810 status_manager.go:851] "Failed to get status for pod" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" pod="openshift-marketplace/redhat-operators-qbmz8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qbmz8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.974094 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" 
pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.974257 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.974411 4810 status_manager.go:851] "Failed to get status for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.974556 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.974692 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:36 crc kubenswrapper[4810]: I0110 06:50:36.974820 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:37 crc kubenswrapper[4810]: I0110 06:50:37.058726 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:50:37 crc kubenswrapper[4810]: I0110 06:50:37.058774 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:50:38 crc kubenswrapper[4810]: I0110 06:50:38.120211 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rtfb8" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" containerName="registry-server" probeResult="failure" output=< Jan 10 06:50:38 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s Jan 10 06:50:38 crc kubenswrapper[4810]: > Jan 10 06:50:38 crc kubenswrapper[4810]: E0110 06:50:38.506042 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="3.2s" Jan 10 06:50:41 crc kubenswrapper[4810]: I0110 06:50:41.695494 4810 status_manager.go:851] "Failed to get status for pod" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" pod="openshift-marketplace/redhat-operators-qbmz8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qbmz8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:41 crc kubenswrapper[4810]: I0110 06:50:41.696349 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:41 crc kubenswrapper[4810]: I0110 06:50:41.696768 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:41 crc kubenswrapper[4810]: I0110 06:50:41.697321 4810 status_manager.go:851] "Failed to get status for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:41 crc kubenswrapper[4810]: I0110 06:50:41.697820 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:41 crc kubenswrapper[4810]: I0110 06:50:41.698185 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:41 crc kubenswrapper[4810]: I0110 06:50:41.698664 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:41 crc kubenswrapper[4810]: E0110 06:50:41.706962 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="6.4s" Jan 10 06:50:42 crc kubenswrapper[4810]: E0110 06:50:42.413787 4810 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18894bf235b5dd1a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-10 06:50:32.374910234 +0000 UTC m=+260.990403127,LastTimestamp:2026-01-10 06:50:32.374910234 +0000 UTC m=+260.990403127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 10 06:50:45 crc kubenswrapper[4810]: I0110 06:50:45.693521 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:50:45 crc kubenswrapper[4810]: I0110 06:50:45.697869 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:45 crc kubenswrapper[4810]: I0110 06:50:45.698622 4810 status_manager.go:851] "Failed to get status for pod" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" pod="openshift-marketplace/redhat-operators-qbmz8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qbmz8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:45 crc kubenswrapper[4810]: I0110 06:50:45.699008 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:45 crc kubenswrapper[4810]: I0110 06:50:45.699453 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:45 crc kubenswrapper[4810]: I0110 06:50:45.700001 4810 status_manager.go:851] "Failed to get status for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 
38.102.83.9:6443: connect: connection refused" Jan 10 06:50:45 crc kubenswrapper[4810]: I0110 06:50:45.700395 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:45 crc kubenswrapper[4810]: I0110 06:50:45.700823 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Jan 10 06:50:45 crc kubenswrapper[4810]: I0110 06:50:45.718059 4810 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="85327799-8e01-4084-99a8-f2c26b046940" Jan 10 06:50:45 crc kubenswrapper[4810]: I0110 06:50:45.718108 4810 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="85327799-8e01-4084-99a8-f2c26b046940" Jan 10 06:50:45 crc kubenswrapper[4810]: E0110 06:50:45.718755 4810 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:50:45 crc kubenswrapper[4810]: I0110 06:50:45.719691 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:50:45 crc kubenswrapper[4810]: W0110 06:50:45.752066 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-5543cf8d174b839f923316cea5f4198e0e5b4da34b09c50b4eb0d869f98d7f86 WatchSource:0}: Error finding container 5543cf8d174b839f923316cea5f4198e0e5b4da34b09c50b4eb0d869f98d7f86: Status 404 returned error can't find the container with id 5543cf8d174b839f923316cea5f4198e0e5b4da34b09c50b4eb0d869f98d7f86
Jan 10 06:50:45 crc kubenswrapper[4810]: I0110 06:50:45.989482 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5543cf8d174b839f923316cea5f4198e0e5b4da34b09c50b4eb0d869f98d7f86"}
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.129804 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rtfb8"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.131121 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.131880 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.132403 4810 status_manager.go:851] "Failed to get status for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.132876 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.133421 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.133880 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.134704 4810 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.134788 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.134801 4810 status_manager.go:851] "Failed to get status for pod" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" pod="openshift-marketplace/redhat-operators-qbmz8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qbmz8\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.205965 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rtfb8"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.206985 4810 status_manager.go:851] "Failed to get status for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.207616 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.208239 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.209144 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.209659 4810 status_manager.go:851] "Failed to get status for pod" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" pod="openshift-marketplace/redhat-operators-qbmz8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qbmz8\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.210069 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:47 crc kubenswrapper[4810]: I0110 06:50:47.210550 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.007549 4810 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="144af825290c2096938f0e84f01682d048c23f630cf154f4bbc0e9f594fbfbd7" exitCode=0
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.007621 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"144af825290c2096938f0e84f01682d048c23f630cf154f4bbc0e9f594fbfbd7"}
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.008611 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.008987 4810 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="85327799-8e01-4084-99a8-f2c26b046940"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.009045 4810 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="85327799-8e01-4084-99a8-f2c26b046940"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.009055 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.009535 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: E0110 06:50:48.009848 4810 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.009959 4810 status_manager.go:851] "Failed to get status for pod" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" pod="openshift-marketplace/redhat-operators-qbmz8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qbmz8\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.010860 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.011482 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.012591 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.012661 4810 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498" exitCode=1
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.012695 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498"}
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.012926 4810 status_manager.go:851] "Failed to get status for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.013713 4810 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.013829 4810 scope.go:117] "RemoveContainer" containerID="b93a86bcea8e1f6e7bf26d67b329c5db1c3110fb062f1809c71b358a3aafd498"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.014131 4810 status_manager.go:851] "Failed to get status for pod" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" pod="openshift-marketplace/redhat-operators-rtfb8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-rtfb8\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.014592 4810 status_manager.go:851] "Failed to get status for pod" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.014987 4810 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.015317 4810 status_manager.go:851] "Failed to get status for pod" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" pod="openshift-marketplace/redhat-operators-qbmz8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-qbmz8\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.028572 4810 status_manager.go:851] "Failed to get status for pod" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" pod="openshift-marketplace/community-operators-k49l4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-k49l4\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.029138 4810 status_manager.go:851] "Failed to get status for pod" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" pod="openshift-marketplace/certified-operators-chcch" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-chcch\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.029628 4810 status_manager.go:851] "Failed to get status for pod" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" pod="openshift-marketplace/certified-operators-xb5xt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-xb5xt\": dial tcp 38.102.83.9:6443: connect: connection refused"
Jan 10 06:50:48 crc kubenswrapper[4810]: E0110 06:50:48.107784 4810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="7s"
Jan 10 06:50:48 crc kubenswrapper[4810]: I0110 06:50:48.136573 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 10 06:50:49 crc kubenswrapper[4810]: I0110 06:50:49.022547 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e353b289e45fafdd2f259e2d2ac6d6db66687221ad2dfc3e8fcfa6f3b1397604"}
Jan 10 06:50:49 crc kubenswrapper[4810]: I0110 06:50:49.022825 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"43a37edd9138d9cd3272a06cdde08fe1c01d0df2f2fecf4e50dfadbf15ab4ed1"}
Jan 10 06:50:49 crc kubenswrapper[4810]: I0110 06:50:49.022840 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"be8d551d2e16163d14230a2a7eb6f77227ee30061e2a0e77aad0b9ab6a91a2f7"}
Jan 10 06:50:49 crc kubenswrapper[4810]: I0110 06:50:49.026093 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 10 06:50:49 crc kubenswrapper[4810]: I0110 06:50:49.026168 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fa293e42c32409bae4a5319cf3797f859f0cf1180370fbdbee42362b0446054d"}
Jan 10 06:50:50 crc kubenswrapper[4810]: I0110 06:50:50.034213 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5875dceb1f5677ca4ac1b022b10c9ce4864ca4d478716c60d60c2ee3eb5e3819"}
Jan 10 06:50:50 crc kubenswrapper[4810]: I0110 06:50:50.034469 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f6633d02cf4bc677f0dab9a5d93fb92f80687a6f2c8c847f72f35651db5b01a0"}
Jan 10 06:50:50 crc kubenswrapper[4810]: I0110 06:50:50.034591 4810 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="85327799-8e01-4084-99a8-f2c26b046940"
Jan 10 06:50:50 crc kubenswrapper[4810]: I0110 06:50:50.034619 4810 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="85327799-8e01-4084-99a8-f2c26b046940"
Jan 10 06:50:50 crc kubenswrapper[4810]: I0110 06:50:50.719868 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:50:50 crc kubenswrapper[4810]: I0110 06:50:50.719916 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:50:50 crc kubenswrapper[4810]: I0110 06:50:50.725826 4810 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]log ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]etcd ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/openshift.io-api-request-count-filter ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/openshift.io-startkubeinformers ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/priority-and-fairness-config-consumer ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/priority-and-fairness-filter ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/start-apiextensions-informers ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/start-apiextensions-controllers ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/crd-informer-synced ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/start-system-namespaces-controller ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/start-cluster-authentication-info-controller ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/start-legacy-token-tracking-controller ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 10 06:50:50 crc kubenswrapper[4810]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/bootstrap-controller ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/apiservice-registration-controller ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/apiservice-discovery-controller ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]autoregister-completion ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/apiservice-openapi-controller ok
Jan 10 06:50:50 crc kubenswrapper[4810]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 10 06:50:50 crc kubenswrapper[4810]: livez check failed
Jan 10 06:50:50 crc kubenswrapper[4810]: I0110 06:50:50.725918 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 10 06:50:53 crc kubenswrapper[4810]: I0110 06:50:53.955444 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 10 06:50:55 crc kubenswrapper[4810]: I0110 06:50:55.053085 4810 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:50:55 crc kubenswrapper[4810]: I0110 06:50:55.227218 4810 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0b8b2136-7348-49b0-9bbe-865be042588b"
Jan 10 06:50:56 crc kubenswrapper[4810]: I0110 06:50:56.073484 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 10 06:50:56 crc kubenswrapper[4810]: I0110 06:50:56.073518 4810 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="85327799-8e01-4084-99a8-f2c26b046940"
Jan 10 06:50:56 crc kubenswrapper[4810]: I0110 06:50:56.073540 4810 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="85327799-8e01-4084-99a8-f2c26b046940"
Jan 10 06:50:56 crc kubenswrapper[4810]: I0110 06:50:56.077818 4810 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0b8b2136-7348-49b0-9bbe-865be042588b"
Jan 10 06:50:57 crc kubenswrapper[4810]: I0110 06:50:57.083126 4810 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="85327799-8e01-4084-99a8-f2c26b046940"
Jan 10 06:50:57 crc kubenswrapper[4810]: I0110 06:50:57.083164 4810 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="85327799-8e01-4084-99a8-f2c26b046940"
Jan 10 06:50:57 crc kubenswrapper[4810]: I0110 06:50:57.087511 4810 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0b8b2136-7348-49b0-9bbe-865be042588b"
Jan 10 06:50:58 crc kubenswrapper[4810]: I0110 06:50:58.136174 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 10 06:50:58 crc kubenswrapper[4810]: I0110 06:50:58.136478 4810 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 10 06:50:58 crc kubenswrapper[4810]: I0110 06:50:58.136538 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 10 06:51:04 crc kubenswrapper[4810]: I0110 06:51:04.998530 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 10 06:51:05 crc kubenswrapper[4810]: I0110 06:51:05.254096 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 10 06:51:05 crc kubenswrapper[4810]: I0110 06:51:05.352139 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 10 06:51:05 crc kubenswrapper[4810]: I0110 06:51:05.558653 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 10 06:51:05 crc kubenswrapper[4810]: I0110 06:51:05.715364 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 10 06:51:05 crc kubenswrapper[4810]: I0110 06:51:05.925933 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 10 06:51:05 crc kubenswrapper[4810]: I0110 06:51:05.952818 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 10 06:51:06 crc kubenswrapper[4810]: I0110 06:51:06.291004 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 10 06:51:06 crc kubenswrapper[4810]: I0110 06:51:06.526964 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 10 06:51:06 crc kubenswrapper[4810]: I0110 06:51:06.543306 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 10 06:51:06 crc kubenswrapper[4810]: I0110 06:51:06.622105 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 10 06:51:06 crc kubenswrapper[4810]: I0110 06:51:06.667443 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 10 06:51:06 crc kubenswrapper[4810]: I0110 06:51:06.678148 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 10 06:51:06 crc kubenswrapper[4810]: I0110 06:51:06.726366 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 10 06:51:06 crc kubenswrapper[4810]: I0110 06:51:06.756844 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 10 06:51:06 crc kubenswrapper[4810]: I0110 06:51:06.757930 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 10 06:51:06 crc kubenswrapper[4810]: I0110 06:51:06.943681 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 10 06:51:07 crc kubenswrapper[4810]: I0110 06:51:07.029258 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 10 06:51:07 crc kubenswrapper[4810]: I0110 06:51:07.102056 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 10 06:51:07 crc kubenswrapper[4810]: I0110 06:51:07.123454 4810 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 10 06:51:07 crc kubenswrapper[4810]: I0110 06:51:07.140512 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 10 06:51:07 crc kubenswrapper[4810]: I0110 06:51:07.153523 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 10 06:51:07 crc kubenswrapper[4810]: I0110 06:51:07.163634 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 10 06:51:07 crc kubenswrapper[4810]: I0110 06:51:07.181658 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 10 06:51:07 crc kubenswrapper[4810]: I0110 06:51:07.405836 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 10 06:51:07 crc kubenswrapper[4810]: I0110 06:51:07.935742 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 10 06:51:07 crc kubenswrapper[4810]: I0110 06:51:07.986769 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 10 06:51:07 crc kubenswrapper[4810]: I0110 06:51:07.989662 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 10 06:51:08 crc kubenswrapper[4810]: I0110 06:51:08.066560 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 10 06:51:08 crc kubenswrapper[4810]: I0110 06:51:08.075725 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 10 06:51:08 crc kubenswrapper[4810]: I0110 06:51:08.094452 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 10 06:51:08 crc kubenswrapper[4810]: I0110 06:51:08.143138 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 10 06:51:08 crc kubenswrapper[4810]: I0110 06:51:08.152684 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 10 06:51:08 crc kubenswrapper[4810]: I0110 06:51:08.239428 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 10 06:51:08 crc kubenswrapper[4810]: I0110 06:51:08.293499 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 10 06:51:08 crc kubenswrapper[4810]: I0110 06:51:08.480267 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 10 06:51:08 crc kubenswrapper[4810]: I0110 06:51:08.537872 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 10 06:51:08 crc kubenswrapper[4810]: I0110 06:51:08.578781 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 10 06:51:08 crc kubenswrapper[4810]: I0110 06:51:08.666917 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 10 06:51:08 crc kubenswrapper[4810]: I0110 06:51:08.810406 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 10 06:51:08 crc kubenswrapper[4810]: I0110 06:51:08.894374 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.006790 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.104979 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.170448 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.180109 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.189550 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.403314 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.404667 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.425877 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.632182 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.641070 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.670998 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.763088 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.821307 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.867239 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.903352 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 10 06:51:09 crc kubenswrapper[4810]: I0110 06:51:09.938092 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.028530 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.055190 4810 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.068561 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.126490 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.168089 4810 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.249647 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.375405 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.395681 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.581048 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.628587 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.655079 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.662129 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.662865 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.688494 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.727765 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.751906 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.864349 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.865976 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.928632 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 10 06:51:10 crc kubenswrapper[4810]: I0110 06:51:10.943376 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.100647 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.103942 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 10
06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.122801 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.125840 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.127365 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.140085 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.181641 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.185390 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.256816 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.262804 4810 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.265798 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.323779 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.340569 4810 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.393520 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.452405 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.538405 4810 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.601796 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.620130 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.723671 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.781062 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.803171 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.838650 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.845838 4810 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 10 06:51:11 crc kubenswrapper[4810]: I0110 06:51:11.864769 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.011170 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.024943 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.058989 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.175580 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.237497 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.295725 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.316506 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.439384 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.523789 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 10 06:51:12 crc 
kubenswrapper[4810]: I0110 06:51:12.532148 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.605653 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.725770 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.828782 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.849567 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.940551 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 10 06:51:12 crc kubenswrapper[4810]: I0110 06:51:12.972896 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.091700 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.183313 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.205807 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.232509 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 10 06:51:13 
crc kubenswrapper[4810]: I0110 06:51:13.246285 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.271969 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.276315 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.322305 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.330590 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.502063 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.506562 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.595477 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.713840 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.767230 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.794991 4810 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.804595 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.819667 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.866216 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.926959 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.931230 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.963511 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 10 06:51:13 crc kubenswrapper[4810]: I0110 06:51:13.990482 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.057995 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.129078 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.170125 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.185550 4810 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.190800 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.356230 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.364302 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.411399 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.415935 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.423133 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.458484 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.517265 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.555493 4810 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.556117 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rtfb8" podStartSLOduration=54.513566284 podStartE2EDuration="2m28.556083512s" 
podCreationTimestamp="2026-01-10 06:48:46 +0000 UTC" firstStartedPulling="2026-01-10 06:48:55.29273126 +0000 UTC m=+163.908224143" lastFinishedPulling="2026-01-10 06:50:29.335248498 +0000 UTC m=+257.950741371" observedRunningTime="2026-01-10 06:50:55.06746917 +0000 UTC m=+283.682962053" watchObservedRunningTime="2026-01-10 06:51:14.556083512 +0000 UTC m=+303.171576425" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.565737 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=42.565711662 podStartE2EDuration="42.565711662s" podCreationTimestamp="2026-01-10 06:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:50:55.088918433 +0000 UTC m=+283.704411356" watchObservedRunningTime="2026-01-10 06:51:14.565711662 +0000 UTC m=+303.181204595" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.566469 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.566535 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.573833 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.588866 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.588839646 podStartE2EDuration="19.588839646s" podCreationTimestamp="2026-01-10 06:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:51:14.586801896 +0000 UTC m=+303.202294779" 
watchObservedRunningTime="2026-01-10 06:51:14.588839646 +0000 UTC m=+303.204332559" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.728907 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.767446 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.833305 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 10 06:51:14 crc kubenswrapper[4810]: I0110 06:51:14.929309 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.080831 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.098285 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.108776 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.132942 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.151936 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.278754 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 10 06:51:15 crc 
kubenswrapper[4810]: I0110 06:51:15.279835 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.371148 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.425379 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.438965 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.467603 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.577685 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.587261 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.649349 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.665879 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.725669 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.748245 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 
06:51:15.750629 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.794688 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.827574 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.901208 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.947969 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 10 06:51:15 crc kubenswrapper[4810]: I0110 06:51:15.995042 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.177636 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.217826 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.317922 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.318500 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.359051 4810 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.408984 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.412476 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.438178 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.578411 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.663167 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.675896 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.710922 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.753377 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.767154 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.787898 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.876788 4810 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 10 06:51:16 crc kubenswrapper[4810]: I0110 06:51:16.948105 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.060626 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.060662 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.080934 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.088232 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.116324 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.186036 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.187240 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.204411 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.260658 4810 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.367250 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.379145 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.654643 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.683756 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.761784 4810 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.762021 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://7669c58d721e7621e3e295ee5f8e290a555dc442a98d5f4f5a3178afc117739f" gracePeriod=5 Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.772075 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.852029 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 10 06:51:17 crc kubenswrapper[4810]: I0110 06:51:17.899747 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 10 06:51:18 crc kubenswrapper[4810]: I0110 06:51:18.092709 4810 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 10 06:51:18 crc kubenswrapper[4810]: I0110 06:51:18.145692 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 10 06:51:18 crc kubenswrapper[4810]: I0110 06:51:18.215028 4810 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 10 06:51:18 crc kubenswrapper[4810]: I0110 06:51:18.226620 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 10 06:51:18 crc kubenswrapper[4810]: I0110 06:51:18.285488 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 10 06:51:18 crc kubenswrapper[4810]: I0110 06:51:18.400412 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 10 06:51:18 crc kubenswrapper[4810]: I0110 06:51:18.582953 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 10 06:51:18 crc kubenswrapper[4810]: I0110 06:51:18.592221 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 10 06:51:18 crc kubenswrapper[4810]: I0110 06:51:18.592291 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 10 06:51:18 crc kubenswrapper[4810]: I0110 06:51:18.607165 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 10 06:51:18 crc kubenswrapper[4810]: I0110 06:51:18.820697 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" 
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.012499 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.074665 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.129764 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.129806 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.148786 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.268288 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.295180 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.312789 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.313542 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.345427 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.471070 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.599635 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.635125 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.655574 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.750952 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.760349 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.825143 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.920879 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 10 06:51:19 crc kubenswrapper[4810]: I0110 06:51:19.942492 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 10 06:51:20 crc kubenswrapper[4810]: I0110 06:51:20.008464 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 10 06:51:20 crc kubenswrapper[4810]: I0110 06:51:20.518962 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 10 06:51:21 crc kubenswrapper[4810]: I0110 06:51:21.355976 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 10 06:51:21 crc kubenswrapper[4810]: I0110 06:51:21.415089 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.269144 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.269422 4810 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="7669c58d721e7621e3e295ee5f8e290a555dc442a98d5f4f5a3178afc117739f" exitCode=137
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.374831 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.374893 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.503775 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.503839 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.503856 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.503896 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.503927 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.504740 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.504799 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.504817 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.504839 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.519288 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.605114 4810 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.605155 4810 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.605166 4810 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.605209 4810 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.605225 4810 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.700063 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.700461 4810 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.709424 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.709461 4810 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="91fc66ff-0136-4ab1-a1aa-f8c611a615ea"
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.712826 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 10 06:51:23 crc kubenswrapper[4810]: I0110 06:51:23.712847 4810 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="91fc66ff-0136-4ab1-a1aa-f8c611a615ea"
Jan 10 06:51:24 crc kubenswrapper[4810]: I0110 06:51:24.274680 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 10 06:51:24 crc kubenswrapper[4810]: I0110 06:51:24.274740 4810 scope.go:117] "RemoveContainer" containerID="7669c58d721e7621e3e295ee5f8e290a555dc442a98d5f4f5a3178afc117739f"
Jan 10 06:51:24 crc kubenswrapper[4810]: I0110 06:51:24.274789 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.303905 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chcch"]
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.304864 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-chcch" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" containerName="registry-server" containerID="cri-o://230facb7aae917b908b94b2599451d6397c59ef32c0c24e7df971a49f8504820" gracePeriod=30
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.313974 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xb5xt"]
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.314205 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xb5xt" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" containerName="registry-server" containerID="cri-o://dffa5a0c4ad606afa0898f3d43b56f8e0fa4195bba92b3071e6a181659b3ff99" gracePeriod=30
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.321255 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k49l4"]
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.321564 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k49l4" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" containerName="registry-server" containerID="cri-o://da02d96ce574b863d2e6c3cddc4c481431e7f9d4e365b9dea4697f9479cfdd0c" gracePeriod=30
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.338295 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pfgf2"]
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.338652 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" podUID="755f846b-01e4-435d-b86c-cbe3f917aa31" containerName="marketplace-operator" containerID="cri-o://12eadeb9feccdef2fc44f15b7dcc115f1a4f403b53fd4722ad7d8b409e723f90" gracePeriod=30
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.348886 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4lxp"]
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.349104 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t4lxp" podUID="6a011bb8-70ef-4c9b-b5cd-642600d792b4" containerName="registry-server" containerID="cri-o://3192feb78240b738d50db713e10ed0f15cdae09e0a7b1c0662152edbe779f033" gracePeriod=30
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.353906 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qbmz8"]
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.354126 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qbmz8" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" containerName="registry-server" containerID="cri-o://0bbae806b2ce0854c07030f593fab50ebcf3da6603b16642f4fc2d850bd2d8e0" gracePeriod=30
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.362541 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rtfb8"]
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.362825 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rtfb8" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" containerName="registry-server" containerID="cri-o://eb3600350a8e93274d7d83358a8be8c4a036fe4f08d008a67f5915c6ad4da6d9" gracePeriod=30
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.373640 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mknkg"]
Jan 10 06:51:32 crc kubenswrapper[4810]: E0110 06:51:32.373820 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" containerName="installer"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.373830 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" containerName="installer"
Jan 10 06:51:32 crc kubenswrapper[4810]: E0110 06:51:32.373842 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.373847 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.373958 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b679c2df-44f6-4b2d-8204-2db8c95a3085" containerName="installer"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.373972 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.374381 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mknkg"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.386428 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mknkg"]
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.523876 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e7dec44-02e8-4784-8062-0fc637ebb92f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mknkg\" (UID: \"9e7dec44-02e8-4784-8062-0fc637ebb92f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mknkg"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.523925 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz7g7\" (UniqueName: \"kubernetes.io/projected/9e7dec44-02e8-4784-8062-0fc637ebb92f-kube-api-access-gz7g7\") pod \"marketplace-operator-79b997595-mknkg\" (UID: \"9e7dec44-02e8-4784-8062-0fc637ebb92f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mknkg"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.523970 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9e7dec44-02e8-4784-8062-0fc637ebb92f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mknkg\" (UID: \"9e7dec44-02e8-4784-8062-0fc637ebb92f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mknkg"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.625882 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e7dec44-02e8-4784-8062-0fc637ebb92f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mknkg\" (UID: \"9e7dec44-02e8-4784-8062-0fc637ebb92f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mknkg"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.625962 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz7g7\" (UniqueName: \"kubernetes.io/projected/9e7dec44-02e8-4784-8062-0fc637ebb92f-kube-api-access-gz7g7\") pod \"marketplace-operator-79b997595-mknkg\" (UID: \"9e7dec44-02e8-4784-8062-0fc637ebb92f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mknkg"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.626039 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9e7dec44-02e8-4784-8062-0fc637ebb92f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mknkg\" (UID: \"9e7dec44-02e8-4784-8062-0fc637ebb92f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mknkg"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.627510 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e7dec44-02e8-4784-8062-0fc637ebb92f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mknkg\" (UID: \"9e7dec44-02e8-4784-8062-0fc637ebb92f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mknkg"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.634960 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9e7dec44-02e8-4784-8062-0fc637ebb92f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mknkg\" (UID: \"9e7dec44-02e8-4784-8062-0fc637ebb92f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mknkg"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.645642 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz7g7\" (UniqueName: \"kubernetes.io/projected/9e7dec44-02e8-4784-8062-0fc637ebb92f-kube-api-access-gz7g7\") pod \"marketplace-operator-79b997595-mknkg\" (UID: \"9e7dec44-02e8-4784-8062-0fc637ebb92f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mknkg"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.761651 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mknkg"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.765510 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chcch"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.768525 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtfb8"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.772563 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.780186 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k49l4"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.787015 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xb5xt"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.806342 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qbmz8"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.809543 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4lxp"
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930253 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a011bb8-70ef-4c9b-b5cd-642600d792b4-utilities\") pod \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\" (UID: \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930301 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4b646bd-b87f-45fc-8142-cef150cda498-catalog-content\") pod \"b4b646bd-b87f-45fc-8142-cef150cda498\" (UID: \"b4b646bd-b87f-45fc-8142-cef150cda498\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930324 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cfhc\" (UniqueName: \"kubernetes.io/projected/b4b646bd-b87f-45fc-8142-cef150cda498-kube-api-access-8cfhc\") pod \"b4b646bd-b87f-45fc-8142-cef150cda498\" (UID: \"b4b646bd-b87f-45fc-8142-cef150cda498\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930344 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a011bb8-70ef-4c9b-b5cd-642600d792b4-catalog-content\") pod \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\" (UID: \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930366 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tftkn\" (UniqueName: \"kubernetes.io/projected/755f846b-01e4-435d-b86c-cbe3f917aa31-kube-api-access-tftkn\") pod \"755f846b-01e4-435d-b86c-cbe3f917aa31\" (UID: \"755f846b-01e4-435d-b86c-cbe3f917aa31\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930397 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p24l\" (UniqueName: \"kubernetes.io/projected/6a011bb8-70ef-4c9b-b5cd-642600d792b4-kube-api-access-4p24l\") pod \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\" (UID: \"6a011bb8-70ef-4c9b-b5cd-642600d792b4\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930412 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fa3125f-8e13-430b-8227-39bd4c3e011b-utilities\") pod \"3fa3125f-8e13-430b-8227-39bd4c3e011b\" (UID: \"3fa3125f-8e13-430b-8227-39bd4c3e011b\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930431 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/755f846b-01e4-435d-b86c-cbe3f917aa31-marketplace-trusted-ca\") pod \"755f846b-01e4-435d-b86c-cbe3f917aa31\" (UID: \"755f846b-01e4-435d-b86c-cbe3f917aa31\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930452 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/755f846b-01e4-435d-b86c-cbe3f917aa31-marketplace-operator-metrics\") pod \"755f846b-01e4-435d-b86c-cbe3f917aa31\" (UID: \"755f846b-01e4-435d-b86c-cbe3f917aa31\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930483 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clvl9\" (UniqueName: \"kubernetes.io/projected/78118904-197b-4ed9-bd6e-c02dc35f4e94-kube-api-access-clvl9\") pod \"78118904-197b-4ed9-bd6e-c02dc35f4e94\" (UID: \"78118904-197b-4ed9-bd6e-c02dc35f4e94\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930512 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505c4d65-7e70-43ba-ae57-4660f944e1dc-utilities\") pod \"505c4d65-7e70-43ba-ae57-4660f944e1dc\" (UID: \"505c4d65-7e70-43ba-ae57-4660f944e1dc\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930530 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78118904-197b-4ed9-bd6e-c02dc35f4e94-utilities\") pod \"78118904-197b-4ed9-bd6e-c02dc35f4e94\" (UID: \"78118904-197b-4ed9-bd6e-c02dc35f4e94\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930567 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x682b\" (UniqueName: \"kubernetes.io/projected/dda74b1c-bf42-4091-af37-e29e1494b2a6-kube-api-access-x682b\") pod \"dda74b1c-bf42-4091-af37-e29e1494b2a6\" (UID: \"dda74b1c-bf42-4091-af37-e29e1494b2a6\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930584 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knfbm\" (UniqueName: \"kubernetes.io/projected/3fa3125f-8e13-430b-8227-39bd4c3e011b-kube-api-access-knfbm\") pod \"3fa3125f-8e13-430b-8227-39bd4c3e011b\" (UID: \"3fa3125f-8e13-430b-8227-39bd4c3e011b\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930607 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fa3125f-8e13-430b-8227-39bd4c3e011b-catalog-content\") pod \"3fa3125f-8e13-430b-8227-39bd4c3e011b\" (UID: \"3fa3125f-8e13-430b-8227-39bd4c3e011b\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930650 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78118904-197b-4ed9-bd6e-c02dc35f4e94-catalog-content\") pod \"78118904-197b-4ed9-bd6e-c02dc35f4e94\" (UID: \"78118904-197b-4ed9-bd6e-c02dc35f4e94\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930667 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda74b1c-bf42-4091-af37-e29e1494b2a6-catalog-content\") pod \"dda74b1c-bf42-4091-af37-e29e1494b2a6\" (UID: \"dda74b1c-bf42-4091-af37-e29e1494b2a6\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930689 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505c4d65-7e70-43ba-ae57-4660f944e1dc-catalog-content\") pod \"505c4d65-7e70-43ba-ae57-4660f944e1dc\" (UID: \"505c4d65-7e70-43ba-ae57-4660f944e1dc\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930708 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda74b1c-bf42-4091-af37-e29e1494b2a6-utilities\") pod \"dda74b1c-bf42-4091-af37-e29e1494b2a6\" (UID: \"dda74b1c-bf42-4091-af37-e29e1494b2a6\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930730 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6d69\" (UniqueName: \"kubernetes.io/projected/505c4d65-7e70-43ba-ae57-4660f944e1dc-kube-api-access-r6d69\") pod \"505c4d65-7e70-43ba-ae57-4660f944e1dc\" (UID: \"505c4d65-7e70-43ba-ae57-4660f944e1dc\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.930756 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4b646bd-b87f-45fc-8142-cef150cda498-utilities\") pod \"b4b646bd-b87f-45fc-8142-cef150cda498\" (UID: \"b4b646bd-b87f-45fc-8142-cef150cda498\") "
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.931458 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a011bb8-70ef-4c9b-b5cd-642600d792b4-utilities" (OuterVolumeSpecName: "utilities") pod "6a011bb8-70ef-4c9b-b5cd-642600d792b4" (UID: "6a011bb8-70ef-4c9b-b5cd-642600d792b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.932653 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda74b1c-bf42-4091-af37-e29e1494b2a6-utilities" (OuterVolumeSpecName: "utilities") pod "dda74b1c-bf42-4091-af37-e29e1494b2a6" (UID: "dda74b1c-bf42-4091-af37-e29e1494b2a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.936297 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dda74b1c-bf42-4091-af37-e29e1494b2a6-kube-api-access-x682b" (OuterVolumeSpecName: "kube-api-access-x682b") pod "dda74b1c-bf42-4091-af37-e29e1494b2a6" (UID: "dda74b1c-bf42-4091-af37-e29e1494b2a6"). InnerVolumeSpecName "kube-api-access-x682b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.936425 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b646bd-b87f-45fc-8142-cef150cda498-kube-api-access-8cfhc" (OuterVolumeSpecName: "kube-api-access-8cfhc") pod "b4b646bd-b87f-45fc-8142-cef150cda498" (UID: "b4b646bd-b87f-45fc-8142-cef150cda498"). InnerVolumeSpecName "kube-api-access-8cfhc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.936606 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505c4d65-7e70-43ba-ae57-4660f944e1dc-utilities" (OuterVolumeSpecName: "utilities") pod "505c4d65-7e70-43ba-ae57-4660f944e1dc" (UID: "505c4d65-7e70-43ba-ae57-4660f944e1dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.936732 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa3125f-8e13-430b-8227-39bd4c3e011b-kube-api-access-knfbm" (OuterVolumeSpecName: "kube-api-access-knfbm") pod "3fa3125f-8e13-430b-8227-39bd4c3e011b" (UID: "3fa3125f-8e13-430b-8227-39bd4c3e011b"). InnerVolumeSpecName "kube-api-access-knfbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.936830 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/755f846b-01e4-435d-b86c-cbe3f917aa31-kube-api-access-tftkn" (OuterVolumeSpecName: "kube-api-access-tftkn") pod "755f846b-01e4-435d-b86c-cbe3f917aa31" (UID: "755f846b-01e4-435d-b86c-cbe3f917aa31"). InnerVolumeSpecName "kube-api-access-tftkn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.937067 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fa3125f-8e13-430b-8227-39bd4c3e011b-utilities" (OuterVolumeSpecName: "utilities") pod "3fa3125f-8e13-430b-8227-39bd4c3e011b" (UID: "3fa3125f-8e13-430b-8227-39bd4c3e011b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.937532 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/755f846b-01e4-435d-b86c-cbe3f917aa31-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "755f846b-01e4-435d-b86c-cbe3f917aa31" (UID: "755f846b-01e4-435d-b86c-cbe3f917aa31"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.938003 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78118904-197b-4ed9-bd6e-c02dc35f4e94-utilities" (OuterVolumeSpecName: "utilities") pod "78118904-197b-4ed9-bd6e-c02dc35f4e94" (UID: "78118904-197b-4ed9-bd6e-c02dc35f4e94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.940262 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/505c4d65-7e70-43ba-ae57-4660f944e1dc-kube-api-access-r6d69" (OuterVolumeSpecName: "kube-api-access-r6d69") pod "505c4d65-7e70-43ba-ae57-4660f944e1dc" (UID: "505c4d65-7e70-43ba-ae57-4660f944e1dc"). InnerVolumeSpecName "kube-api-access-r6d69". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.940442 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4b646bd-b87f-45fc-8142-cef150cda498-utilities" (OuterVolumeSpecName: "utilities") pod "b4b646bd-b87f-45fc-8142-cef150cda498" (UID: "b4b646bd-b87f-45fc-8142-cef150cda498"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.941350 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78118904-197b-4ed9-bd6e-c02dc35f4e94-kube-api-access-clvl9" (OuterVolumeSpecName: "kube-api-access-clvl9") pod "78118904-197b-4ed9-bd6e-c02dc35f4e94" (UID: "78118904-197b-4ed9-bd6e-c02dc35f4e94"). InnerVolumeSpecName "kube-api-access-clvl9".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.945526 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755f846b-01e4-435d-b86c-cbe3f917aa31-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "755f846b-01e4-435d-b86c-cbe3f917aa31" (UID: "755f846b-01e4-435d-b86c-cbe3f917aa31"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.954570 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a011bb8-70ef-4c9b-b5cd-642600d792b4-kube-api-access-4p24l" (OuterVolumeSpecName: "kube-api-access-4p24l") pod "6a011bb8-70ef-4c9b-b5cd-642600d792b4" (UID: "6a011bb8-70ef-4c9b-b5cd-642600d792b4"). InnerVolumeSpecName "kube-api-access-4p24l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.969517 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a011bb8-70ef-4c9b-b5cd-642600d792b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a011bb8-70ef-4c9b-b5cd-642600d792b4" (UID: "6a011bb8-70ef-4c9b-b5cd-642600d792b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:51:32 crc kubenswrapper[4810]: I0110 06:51:32.997813 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4b646bd-b87f-45fc-8142-cef150cda498-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4b646bd-b87f-45fc-8142-cef150cda498" (UID: "b4b646bd-b87f-45fc-8142-cef150cda498"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.002360 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dda74b1c-bf42-4091-af37-e29e1494b2a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dda74b1c-bf42-4091-af37-e29e1494b2a6" (UID: "dda74b1c-bf42-4091-af37-e29e1494b2a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.030151 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505c4d65-7e70-43ba-ae57-4660f944e1dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "505c4d65-7e70-43ba-ae57-4660f944e1dc" (UID: "505c4d65-7e70-43ba-ae57-4660f944e1dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031578 4810 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/755f846b-01e4-435d-b86c-cbe3f917aa31-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031605 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clvl9\" (UniqueName: \"kubernetes.io/projected/78118904-197b-4ed9-bd6e-c02dc35f4e94-kube-api-access-clvl9\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031616 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505c4d65-7e70-43ba-ae57-4660f944e1dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031628 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/78118904-197b-4ed9-bd6e-c02dc35f4e94-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031636 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x682b\" (UniqueName: \"kubernetes.io/projected/dda74b1c-bf42-4091-af37-e29e1494b2a6-kube-api-access-x682b\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031647 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knfbm\" (UniqueName: \"kubernetes.io/projected/3fa3125f-8e13-430b-8227-39bd4c3e011b-kube-api-access-knfbm\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031656 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dda74b1c-bf42-4091-af37-e29e1494b2a6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031665 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505c4d65-7e70-43ba-ae57-4660f944e1dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031673 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dda74b1c-bf42-4091-af37-e29e1494b2a6-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031681 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6d69\" (UniqueName: \"kubernetes.io/projected/505c4d65-7e70-43ba-ae57-4660f944e1dc-kube-api-access-r6d69\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031688 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4b646bd-b87f-45fc-8142-cef150cda498-utilities\") on node \"crc\" 
DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031696 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a011bb8-70ef-4c9b-b5cd-642600d792b4-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031704 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4b646bd-b87f-45fc-8142-cef150cda498-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031711 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cfhc\" (UniqueName: \"kubernetes.io/projected/b4b646bd-b87f-45fc-8142-cef150cda498-kube-api-access-8cfhc\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031719 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a011bb8-70ef-4c9b-b5cd-642600d792b4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031728 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tftkn\" (UniqueName: \"kubernetes.io/projected/755f846b-01e4-435d-b86c-cbe3f917aa31-kube-api-access-tftkn\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031736 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p24l\" (UniqueName: \"kubernetes.io/projected/6a011bb8-70ef-4c9b-b5cd-642600d792b4-kube-api-access-4p24l\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031745 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3fa3125f-8e13-430b-8227-39bd4c3e011b-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.031753 4810 
reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/755f846b-01e4-435d-b86c-cbe3f917aa31-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.080801 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fa3125f-8e13-430b-8227-39bd4c3e011b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3fa3125f-8e13-430b-8227-39bd4c3e011b" (UID: "3fa3125f-8e13-430b-8227-39bd4c3e011b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.081158 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78118904-197b-4ed9-bd6e-c02dc35f4e94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78118904-197b-4ed9-bd6e-c02dc35f4e94" (UID: "78118904-197b-4ed9-bd6e-c02dc35f4e94"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.133125 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3fa3125f-8e13-430b-8227-39bd4c3e011b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.133163 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78118904-197b-4ed9-bd6e-c02dc35f4e94-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.230807 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mknkg"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.333472 4810 generic.go:334] "Generic (PLEG): container finished" podID="3fa3125f-8e13-430b-8227-39bd4c3e011b" containerID="0bbae806b2ce0854c07030f593fab50ebcf3da6603b16642f4fc2d850bd2d8e0" exitCode=0 Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.333553 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qbmz8" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.333608 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbmz8" event={"ID":"3fa3125f-8e13-430b-8227-39bd4c3e011b","Type":"ContainerDied","Data":"0bbae806b2ce0854c07030f593fab50ebcf3da6603b16642f4fc2d850bd2d8e0"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.334262 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qbmz8" event={"ID":"3fa3125f-8e13-430b-8227-39bd4c3e011b","Type":"ContainerDied","Data":"abf4f9f67f60b6bd29899071791c1559930d9118804f87544a4761f661241ca7"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.334306 4810 scope.go:117] "RemoveContainer" containerID="0bbae806b2ce0854c07030f593fab50ebcf3da6603b16642f4fc2d850bd2d8e0" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.335672 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mknkg" event={"ID":"9e7dec44-02e8-4784-8062-0fc637ebb92f","Type":"ContainerStarted","Data":"82d30eed5d3f70678d81b57d0aff96c83e76401269b8cc3eacf5232488655547"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.339167 4810 generic.go:334] "Generic (PLEG): container finished" podID="b4b646bd-b87f-45fc-8142-cef150cda498" containerID="dffa5a0c4ad606afa0898f3d43b56f8e0fa4195bba92b3071e6a181659b3ff99" exitCode=0 Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.339277 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xb5xt" event={"ID":"b4b646bd-b87f-45fc-8142-cef150cda498","Type":"ContainerDied","Data":"dffa5a0c4ad606afa0898f3d43b56f8e0fa4195bba92b3071e6a181659b3ff99"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.339320 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xb5xt" 
event={"ID":"b4b646bd-b87f-45fc-8142-cef150cda498","Type":"ContainerDied","Data":"ef226f551b831fb9a94797bd4f5d304b49bd6c2d46130a5c0f69083f1bfe260e"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.339394 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xb5xt" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.341318 4810 generic.go:334] "Generic (PLEG): container finished" podID="505c4d65-7e70-43ba-ae57-4660f944e1dc" containerID="da02d96ce574b863d2e6c3cddc4c481431e7f9d4e365b9dea4697f9479cfdd0c" exitCode=0 Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.341372 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k49l4" event={"ID":"505c4d65-7e70-43ba-ae57-4660f944e1dc","Type":"ContainerDied","Data":"da02d96ce574b863d2e6c3cddc4c481431e7f9d4e365b9dea4697f9479cfdd0c"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.341396 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k49l4" event={"ID":"505c4d65-7e70-43ba-ae57-4660f944e1dc","Type":"ContainerDied","Data":"59fc5c41c39ccbe4da8b7e4f327b520d7219d6394f43193ca3894f3a0b87b994"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.341457 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k49l4" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.352109 4810 generic.go:334] "Generic (PLEG): container finished" podID="dda74b1c-bf42-4091-af37-e29e1494b2a6" containerID="230facb7aae917b908b94b2599451d6397c59ef32c0c24e7df971a49f8504820" exitCode=0 Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.352167 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chcch" event={"ID":"dda74b1c-bf42-4091-af37-e29e1494b2a6","Type":"ContainerDied","Data":"230facb7aae917b908b94b2599451d6397c59ef32c0c24e7df971a49f8504820"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.352213 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chcch" event={"ID":"dda74b1c-bf42-4091-af37-e29e1494b2a6","Type":"ContainerDied","Data":"3ed9b7f5ffcb0aa09396ec532e92cd9b309b05489f18ad385596a9b58d16c797"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.352272 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chcch" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.356612 4810 generic.go:334] "Generic (PLEG): container finished" podID="6a011bb8-70ef-4c9b-b5cd-642600d792b4" containerID="3192feb78240b738d50db713e10ed0f15cdae09e0a7b1c0662152edbe779f033" exitCode=0 Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.356675 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4lxp" event={"ID":"6a011bb8-70ef-4c9b-b5cd-642600d792b4","Type":"ContainerDied","Data":"3192feb78240b738d50db713e10ed0f15cdae09e0a7b1c0662152edbe779f033"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.356706 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t4lxp" event={"ID":"6a011bb8-70ef-4c9b-b5cd-642600d792b4","Type":"ContainerDied","Data":"dce29c8470a8ae72b4ce9b77cfb85ac20f2f27177871407826165cddf71e2074"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.356703 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t4lxp" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.360013 4810 generic.go:334] "Generic (PLEG): container finished" podID="78118904-197b-4ed9-bd6e-c02dc35f4e94" containerID="eb3600350a8e93274d7d83358a8be8c4a036fe4f08d008a67f5915c6ad4da6d9" exitCode=0 Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.360155 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rtfb8" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.360788 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfb8" event={"ID":"78118904-197b-4ed9-bd6e-c02dc35f4e94","Type":"ContainerDied","Data":"eb3600350a8e93274d7d83358a8be8c4a036fe4f08d008a67f5915c6ad4da6d9"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.360824 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtfb8" event={"ID":"78118904-197b-4ed9-bd6e-c02dc35f4e94","Type":"ContainerDied","Data":"be2f681d8293407fa187e5d162da0551d263cf572d236de42ce71c7deb762e2d"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.365964 4810 scope.go:117] "RemoveContainer" containerID="ac9e9dd26e6872f62024886023b3f41c5325f3e43738adb0e477ad041f388585" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.366590 4810 generic.go:334] "Generic (PLEG): container finished" podID="755f846b-01e4-435d-b86c-cbe3f917aa31" containerID="12eadeb9feccdef2fc44f15b7dcc115f1a4f403b53fd4722ad7d8b409e723f90" exitCode=0 Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.366651 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" event={"ID":"755f846b-01e4-435d-b86c-cbe3f917aa31","Type":"ContainerDied","Data":"12eadeb9feccdef2fc44f15b7dcc115f1a4f403b53fd4722ad7d8b409e723f90"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.366678 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" event={"ID":"755f846b-01e4-435d-b86c-cbe3f917aa31","Type":"ContainerDied","Data":"38a2de2a402d77923f0ab26011b1cc22f2eeb11c56e3b9f4ca1495dfd1da79df"} Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.366722 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pfgf2" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.373058 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qbmz8"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.384795 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qbmz8"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.426080 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xb5xt"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.432019 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xb5xt"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.438673 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pfgf2"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.444573 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pfgf2"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.447588 4810 scope.go:117] "RemoveContainer" containerID="b97cad88183dc333ce8854ba712408f447c8eadc2898403babe5eaf0a52ff4c6" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.453722 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k49l4"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.459853 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k49l4"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.468463 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chcch"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.475233 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-chcch"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.484589 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rtfb8"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.489101 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rtfb8"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.494338 4810 scope.go:117] "RemoveContainer" containerID="0bbae806b2ce0854c07030f593fab50ebcf3da6603b16642f4fc2d850bd2d8e0" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.501606 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bbae806b2ce0854c07030f593fab50ebcf3da6603b16642f4fc2d850bd2d8e0\": container with ID starting with 0bbae806b2ce0854c07030f593fab50ebcf3da6603b16642f4fc2d850bd2d8e0 not found: ID does not exist" containerID="0bbae806b2ce0854c07030f593fab50ebcf3da6603b16642f4fc2d850bd2d8e0" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.501644 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bbae806b2ce0854c07030f593fab50ebcf3da6603b16642f4fc2d850bd2d8e0"} err="failed to get container status \"0bbae806b2ce0854c07030f593fab50ebcf3da6603b16642f4fc2d850bd2d8e0\": rpc error: code = NotFound desc = could not find container \"0bbae806b2ce0854c07030f593fab50ebcf3da6603b16642f4fc2d850bd2d8e0\": container with ID starting with 0bbae806b2ce0854c07030f593fab50ebcf3da6603b16642f4fc2d850bd2d8e0 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.501669 4810 scope.go:117] "RemoveContainer" containerID="ac9e9dd26e6872f62024886023b3f41c5325f3e43738adb0e477ad041f388585" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.501923 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"ac9e9dd26e6872f62024886023b3f41c5325f3e43738adb0e477ad041f388585\": container with ID starting with ac9e9dd26e6872f62024886023b3f41c5325f3e43738adb0e477ad041f388585 not found: ID does not exist" containerID="ac9e9dd26e6872f62024886023b3f41c5325f3e43738adb0e477ad041f388585" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.501941 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9e9dd26e6872f62024886023b3f41c5325f3e43738adb0e477ad041f388585"} err="failed to get container status \"ac9e9dd26e6872f62024886023b3f41c5325f3e43738adb0e477ad041f388585\": rpc error: code = NotFound desc = could not find container \"ac9e9dd26e6872f62024886023b3f41c5325f3e43738adb0e477ad041f388585\": container with ID starting with ac9e9dd26e6872f62024886023b3f41c5325f3e43738adb0e477ad041f388585 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.501953 4810 scope.go:117] "RemoveContainer" containerID="b97cad88183dc333ce8854ba712408f447c8eadc2898403babe5eaf0a52ff4c6" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.502124 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b97cad88183dc333ce8854ba712408f447c8eadc2898403babe5eaf0a52ff4c6\": container with ID starting with b97cad88183dc333ce8854ba712408f447c8eadc2898403babe5eaf0a52ff4c6 not found: ID does not exist" containerID="b97cad88183dc333ce8854ba712408f447c8eadc2898403babe5eaf0a52ff4c6" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.502144 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b97cad88183dc333ce8854ba712408f447c8eadc2898403babe5eaf0a52ff4c6"} err="failed to get container status \"b97cad88183dc333ce8854ba712408f447c8eadc2898403babe5eaf0a52ff4c6\": rpc error: code = NotFound desc = could not find container \"b97cad88183dc333ce8854ba712408f447c8eadc2898403babe5eaf0a52ff4c6\": container 
with ID starting with b97cad88183dc333ce8854ba712408f447c8eadc2898403babe5eaf0a52ff4c6 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.502156 4810 scope.go:117] "RemoveContainer" containerID="dffa5a0c4ad606afa0898f3d43b56f8e0fa4195bba92b3071e6a181659b3ff99" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.502237 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4lxp"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.521563 4810 scope.go:117] "RemoveContainer" containerID="d6a705968d516fa0198743fa7ba94ee7df3f7c67cf21c7e43f17d62f48c3d8cf" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.523435 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t4lxp"] Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.546695 4810 scope.go:117] "RemoveContainer" containerID="988f8780afbff9d574d3c686bfc8909def34af50741e7c3d10598b7dba440d37" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.567013 4810 scope.go:117] "RemoveContainer" containerID="dffa5a0c4ad606afa0898f3d43b56f8e0fa4195bba92b3071e6a181659b3ff99" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.567348 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dffa5a0c4ad606afa0898f3d43b56f8e0fa4195bba92b3071e6a181659b3ff99\": container with ID starting with dffa5a0c4ad606afa0898f3d43b56f8e0fa4195bba92b3071e6a181659b3ff99 not found: ID does not exist" containerID="dffa5a0c4ad606afa0898f3d43b56f8e0fa4195bba92b3071e6a181659b3ff99" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.567384 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dffa5a0c4ad606afa0898f3d43b56f8e0fa4195bba92b3071e6a181659b3ff99"} err="failed to get container status \"dffa5a0c4ad606afa0898f3d43b56f8e0fa4195bba92b3071e6a181659b3ff99\": rpc error: code = 
NotFound desc = could not find container \"dffa5a0c4ad606afa0898f3d43b56f8e0fa4195bba92b3071e6a181659b3ff99\": container with ID starting with dffa5a0c4ad606afa0898f3d43b56f8e0fa4195bba92b3071e6a181659b3ff99 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.567408 4810 scope.go:117] "RemoveContainer" containerID="d6a705968d516fa0198743fa7ba94ee7df3f7c67cf21c7e43f17d62f48c3d8cf" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.567652 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a705968d516fa0198743fa7ba94ee7df3f7c67cf21c7e43f17d62f48c3d8cf\": container with ID starting with d6a705968d516fa0198743fa7ba94ee7df3f7c67cf21c7e43f17d62f48c3d8cf not found: ID does not exist" containerID="d6a705968d516fa0198743fa7ba94ee7df3f7c67cf21c7e43f17d62f48c3d8cf" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.567691 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a705968d516fa0198743fa7ba94ee7df3f7c67cf21c7e43f17d62f48c3d8cf"} err="failed to get container status \"d6a705968d516fa0198743fa7ba94ee7df3f7c67cf21c7e43f17d62f48c3d8cf\": rpc error: code = NotFound desc = could not find container \"d6a705968d516fa0198743fa7ba94ee7df3f7c67cf21c7e43f17d62f48c3d8cf\": container with ID starting with d6a705968d516fa0198743fa7ba94ee7df3f7c67cf21c7e43f17d62f48c3d8cf not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.567720 4810 scope.go:117] "RemoveContainer" containerID="988f8780afbff9d574d3c686bfc8909def34af50741e7c3d10598b7dba440d37" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.568033 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"988f8780afbff9d574d3c686bfc8909def34af50741e7c3d10598b7dba440d37\": container with ID starting with 
988f8780afbff9d574d3c686bfc8909def34af50741e7c3d10598b7dba440d37 not found: ID does not exist" containerID="988f8780afbff9d574d3c686bfc8909def34af50741e7c3d10598b7dba440d37" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.568090 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988f8780afbff9d574d3c686bfc8909def34af50741e7c3d10598b7dba440d37"} err="failed to get container status \"988f8780afbff9d574d3c686bfc8909def34af50741e7c3d10598b7dba440d37\": rpc error: code = NotFound desc = could not find container \"988f8780afbff9d574d3c686bfc8909def34af50741e7c3d10598b7dba440d37\": container with ID starting with 988f8780afbff9d574d3c686bfc8909def34af50741e7c3d10598b7dba440d37 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.568120 4810 scope.go:117] "RemoveContainer" containerID="da02d96ce574b863d2e6c3cddc4c481431e7f9d4e365b9dea4697f9479cfdd0c" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.578256 4810 scope.go:117] "RemoveContainer" containerID="2ecb3e5e4ce0bd3552afe61cb74c8457495be877aff47f5b2719b1d2864509cc" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.590146 4810 scope.go:117] "RemoveContainer" containerID="856e698e1b0a7125fffb0183aeefb22bb1ed3a88264fb678d1e91a96b36bafdf" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.603219 4810 scope.go:117] "RemoveContainer" containerID="da02d96ce574b863d2e6c3cddc4c481431e7f9d4e365b9dea4697f9479cfdd0c" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.603676 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da02d96ce574b863d2e6c3cddc4c481431e7f9d4e365b9dea4697f9479cfdd0c\": container with ID starting with da02d96ce574b863d2e6c3cddc4c481431e7f9d4e365b9dea4697f9479cfdd0c not found: ID does not exist" containerID="da02d96ce574b863d2e6c3cddc4c481431e7f9d4e365b9dea4697f9479cfdd0c" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 
06:51:33.603706 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da02d96ce574b863d2e6c3cddc4c481431e7f9d4e365b9dea4697f9479cfdd0c"} err="failed to get container status \"da02d96ce574b863d2e6c3cddc4c481431e7f9d4e365b9dea4697f9479cfdd0c\": rpc error: code = NotFound desc = could not find container \"da02d96ce574b863d2e6c3cddc4c481431e7f9d4e365b9dea4697f9479cfdd0c\": container with ID starting with da02d96ce574b863d2e6c3cddc4c481431e7f9d4e365b9dea4697f9479cfdd0c not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.603902 4810 scope.go:117] "RemoveContainer" containerID="2ecb3e5e4ce0bd3552afe61cb74c8457495be877aff47f5b2719b1d2864509cc" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.604983 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ecb3e5e4ce0bd3552afe61cb74c8457495be877aff47f5b2719b1d2864509cc\": container with ID starting with 2ecb3e5e4ce0bd3552afe61cb74c8457495be877aff47f5b2719b1d2864509cc not found: ID does not exist" containerID="2ecb3e5e4ce0bd3552afe61cb74c8457495be877aff47f5b2719b1d2864509cc" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.605006 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ecb3e5e4ce0bd3552afe61cb74c8457495be877aff47f5b2719b1d2864509cc"} err="failed to get container status \"2ecb3e5e4ce0bd3552afe61cb74c8457495be877aff47f5b2719b1d2864509cc\": rpc error: code = NotFound desc = could not find container \"2ecb3e5e4ce0bd3552afe61cb74c8457495be877aff47f5b2719b1d2864509cc\": container with ID starting with 2ecb3e5e4ce0bd3552afe61cb74c8457495be877aff47f5b2719b1d2864509cc not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.605029 4810 scope.go:117] "RemoveContainer" containerID="856e698e1b0a7125fffb0183aeefb22bb1ed3a88264fb678d1e91a96b36bafdf" Jan 10 06:51:33 crc 
kubenswrapper[4810]: E0110 06:51:33.605270 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"856e698e1b0a7125fffb0183aeefb22bb1ed3a88264fb678d1e91a96b36bafdf\": container with ID starting with 856e698e1b0a7125fffb0183aeefb22bb1ed3a88264fb678d1e91a96b36bafdf not found: ID does not exist" containerID="856e698e1b0a7125fffb0183aeefb22bb1ed3a88264fb678d1e91a96b36bafdf" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.605295 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"856e698e1b0a7125fffb0183aeefb22bb1ed3a88264fb678d1e91a96b36bafdf"} err="failed to get container status \"856e698e1b0a7125fffb0183aeefb22bb1ed3a88264fb678d1e91a96b36bafdf\": rpc error: code = NotFound desc = could not find container \"856e698e1b0a7125fffb0183aeefb22bb1ed3a88264fb678d1e91a96b36bafdf\": container with ID starting with 856e698e1b0a7125fffb0183aeefb22bb1ed3a88264fb678d1e91a96b36bafdf not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.605310 4810 scope.go:117] "RemoveContainer" containerID="230facb7aae917b908b94b2599451d6397c59ef32c0c24e7df971a49f8504820" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.616670 4810 scope.go:117] "RemoveContainer" containerID="15f7cdb396000da6aa75c70c7f168578c16aca6030f9d26712191ffea9eeeca7" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.635030 4810 scope.go:117] "RemoveContainer" containerID="528bd4c6c541efe1e3b5e84f35a965deaa3a9f7fa7ba8e58864b31b6599c3299" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.649142 4810 scope.go:117] "RemoveContainer" containerID="230facb7aae917b908b94b2599451d6397c59ef32c0c24e7df971a49f8504820" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.649519 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"230facb7aae917b908b94b2599451d6397c59ef32c0c24e7df971a49f8504820\": container with ID starting with 230facb7aae917b908b94b2599451d6397c59ef32c0c24e7df971a49f8504820 not found: ID does not exist" containerID="230facb7aae917b908b94b2599451d6397c59ef32c0c24e7df971a49f8504820" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.649551 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"230facb7aae917b908b94b2599451d6397c59ef32c0c24e7df971a49f8504820"} err="failed to get container status \"230facb7aae917b908b94b2599451d6397c59ef32c0c24e7df971a49f8504820\": rpc error: code = NotFound desc = could not find container \"230facb7aae917b908b94b2599451d6397c59ef32c0c24e7df971a49f8504820\": container with ID starting with 230facb7aae917b908b94b2599451d6397c59ef32c0c24e7df971a49f8504820 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.649580 4810 scope.go:117] "RemoveContainer" containerID="15f7cdb396000da6aa75c70c7f168578c16aca6030f9d26712191ffea9eeeca7" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.649984 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f7cdb396000da6aa75c70c7f168578c16aca6030f9d26712191ffea9eeeca7\": container with ID starting with 15f7cdb396000da6aa75c70c7f168578c16aca6030f9d26712191ffea9eeeca7 not found: ID does not exist" containerID="15f7cdb396000da6aa75c70c7f168578c16aca6030f9d26712191ffea9eeeca7" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.650009 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f7cdb396000da6aa75c70c7f168578c16aca6030f9d26712191ffea9eeeca7"} err="failed to get container status \"15f7cdb396000da6aa75c70c7f168578c16aca6030f9d26712191ffea9eeeca7\": rpc error: code = NotFound desc = could not find container \"15f7cdb396000da6aa75c70c7f168578c16aca6030f9d26712191ffea9eeeca7\": container with ID 
starting with 15f7cdb396000da6aa75c70c7f168578c16aca6030f9d26712191ffea9eeeca7 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.650026 4810 scope.go:117] "RemoveContainer" containerID="528bd4c6c541efe1e3b5e84f35a965deaa3a9f7fa7ba8e58864b31b6599c3299" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.650352 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"528bd4c6c541efe1e3b5e84f35a965deaa3a9f7fa7ba8e58864b31b6599c3299\": container with ID starting with 528bd4c6c541efe1e3b5e84f35a965deaa3a9f7fa7ba8e58864b31b6599c3299 not found: ID does not exist" containerID="528bd4c6c541efe1e3b5e84f35a965deaa3a9f7fa7ba8e58864b31b6599c3299" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.650374 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"528bd4c6c541efe1e3b5e84f35a965deaa3a9f7fa7ba8e58864b31b6599c3299"} err="failed to get container status \"528bd4c6c541efe1e3b5e84f35a965deaa3a9f7fa7ba8e58864b31b6599c3299\": rpc error: code = NotFound desc = could not find container \"528bd4c6c541efe1e3b5e84f35a965deaa3a9f7fa7ba8e58864b31b6599c3299\": container with ID starting with 528bd4c6c541efe1e3b5e84f35a965deaa3a9f7fa7ba8e58864b31b6599c3299 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.650394 4810 scope.go:117] "RemoveContainer" containerID="3192feb78240b738d50db713e10ed0f15cdae09e0a7b1c0662152edbe779f033" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.667645 4810 scope.go:117] "RemoveContainer" containerID="ec5f16940b0bfe6cb69769c862934412705d929acc0a330b222782c698f13814" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.680493 4810 scope.go:117] "RemoveContainer" containerID="35628916d7d1d729f6e0abb4791b67cc8ac2666aa0a7fcf976e1c9cdf5862c29" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.699389 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" path="/var/lib/kubelet/pods/3fa3125f-8e13-430b-8227-39bd4c3e011b/volumes" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.700129 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" path="/var/lib/kubelet/pods/505c4d65-7e70-43ba-ae57-4660f944e1dc/volumes" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.700852 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a011bb8-70ef-4c9b-b5cd-642600d792b4" path="/var/lib/kubelet/pods/6a011bb8-70ef-4c9b-b5cd-642600d792b4/volumes" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.702143 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="755f846b-01e4-435d-b86c-cbe3f917aa31" path="/var/lib/kubelet/pods/755f846b-01e4-435d-b86c-cbe3f917aa31/volumes" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.702890 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" path="/var/lib/kubelet/pods/78118904-197b-4ed9-bd6e-c02dc35f4e94/volumes" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.704090 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" path="/var/lib/kubelet/pods/b4b646bd-b87f-45fc-8142-cef150cda498/volumes" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.704648 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" path="/var/lib/kubelet/pods/dda74b1c-bf42-4091-af37-e29e1494b2a6/volumes" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.705759 4810 scope.go:117] "RemoveContainer" containerID="3192feb78240b738d50db713e10ed0f15cdae09e0a7b1c0662152edbe779f033" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.706100 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"3192feb78240b738d50db713e10ed0f15cdae09e0a7b1c0662152edbe779f033\": container with ID starting with 3192feb78240b738d50db713e10ed0f15cdae09e0a7b1c0662152edbe779f033 not found: ID does not exist" containerID="3192feb78240b738d50db713e10ed0f15cdae09e0a7b1c0662152edbe779f033" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.706126 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3192feb78240b738d50db713e10ed0f15cdae09e0a7b1c0662152edbe779f033"} err="failed to get container status \"3192feb78240b738d50db713e10ed0f15cdae09e0a7b1c0662152edbe779f033\": rpc error: code = NotFound desc = could not find container \"3192feb78240b738d50db713e10ed0f15cdae09e0a7b1c0662152edbe779f033\": container with ID starting with 3192feb78240b738d50db713e10ed0f15cdae09e0a7b1c0662152edbe779f033 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.706147 4810 scope.go:117] "RemoveContainer" containerID="ec5f16940b0bfe6cb69769c862934412705d929acc0a330b222782c698f13814" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.706377 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec5f16940b0bfe6cb69769c862934412705d929acc0a330b222782c698f13814\": container with ID starting with ec5f16940b0bfe6cb69769c862934412705d929acc0a330b222782c698f13814 not found: ID does not exist" containerID="ec5f16940b0bfe6cb69769c862934412705d929acc0a330b222782c698f13814" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.706399 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec5f16940b0bfe6cb69769c862934412705d929acc0a330b222782c698f13814"} err="failed to get container status \"ec5f16940b0bfe6cb69769c862934412705d929acc0a330b222782c698f13814\": rpc error: code = NotFound desc = could not find container \"ec5f16940b0bfe6cb69769c862934412705d929acc0a330b222782c698f13814\": container 
with ID starting with ec5f16940b0bfe6cb69769c862934412705d929acc0a330b222782c698f13814 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.706413 4810 scope.go:117] "RemoveContainer" containerID="35628916d7d1d729f6e0abb4791b67cc8ac2666aa0a7fcf976e1c9cdf5862c29" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.706859 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35628916d7d1d729f6e0abb4791b67cc8ac2666aa0a7fcf976e1c9cdf5862c29\": container with ID starting with 35628916d7d1d729f6e0abb4791b67cc8ac2666aa0a7fcf976e1c9cdf5862c29 not found: ID does not exist" containerID="35628916d7d1d729f6e0abb4791b67cc8ac2666aa0a7fcf976e1c9cdf5862c29" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.706922 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35628916d7d1d729f6e0abb4791b67cc8ac2666aa0a7fcf976e1c9cdf5862c29"} err="failed to get container status \"35628916d7d1d729f6e0abb4791b67cc8ac2666aa0a7fcf976e1c9cdf5862c29\": rpc error: code = NotFound desc = could not find container \"35628916d7d1d729f6e0abb4791b67cc8ac2666aa0a7fcf976e1c9cdf5862c29\": container with ID starting with 35628916d7d1d729f6e0abb4791b67cc8ac2666aa0a7fcf976e1c9cdf5862c29 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.706963 4810 scope.go:117] "RemoveContainer" containerID="eb3600350a8e93274d7d83358a8be8c4a036fe4f08d008a67f5915c6ad4da6d9" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.719265 4810 scope.go:117] "RemoveContainer" containerID="1019e916358fbd31fbba59c942016069b452b1639acb70474c5cc9fb3be4a248" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.737535 4810 scope.go:117] "RemoveContainer" containerID="e563a2da27777f0410b6f8ffe10315b1b3e267c6d648ad94fcd5264ea06a8022" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.749085 4810 scope.go:117] "RemoveContainer" 
containerID="eb3600350a8e93274d7d83358a8be8c4a036fe4f08d008a67f5915c6ad4da6d9" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.749370 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb3600350a8e93274d7d83358a8be8c4a036fe4f08d008a67f5915c6ad4da6d9\": container with ID starting with eb3600350a8e93274d7d83358a8be8c4a036fe4f08d008a67f5915c6ad4da6d9 not found: ID does not exist" containerID="eb3600350a8e93274d7d83358a8be8c4a036fe4f08d008a67f5915c6ad4da6d9" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.749395 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb3600350a8e93274d7d83358a8be8c4a036fe4f08d008a67f5915c6ad4da6d9"} err="failed to get container status \"eb3600350a8e93274d7d83358a8be8c4a036fe4f08d008a67f5915c6ad4da6d9\": rpc error: code = NotFound desc = could not find container \"eb3600350a8e93274d7d83358a8be8c4a036fe4f08d008a67f5915c6ad4da6d9\": container with ID starting with eb3600350a8e93274d7d83358a8be8c4a036fe4f08d008a67f5915c6ad4da6d9 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.749418 4810 scope.go:117] "RemoveContainer" containerID="1019e916358fbd31fbba59c942016069b452b1639acb70474c5cc9fb3be4a248" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.749616 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1019e916358fbd31fbba59c942016069b452b1639acb70474c5cc9fb3be4a248\": container with ID starting with 1019e916358fbd31fbba59c942016069b452b1639acb70474c5cc9fb3be4a248 not found: ID does not exist" containerID="1019e916358fbd31fbba59c942016069b452b1639acb70474c5cc9fb3be4a248" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.749633 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1019e916358fbd31fbba59c942016069b452b1639acb70474c5cc9fb3be4a248"} err="failed to get container status \"1019e916358fbd31fbba59c942016069b452b1639acb70474c5cc9fb3be4a248\": rpc error: code = NotFound desc = could not find container \"1019e916358fbd31fbba59c942016069b452b1639acb70474c5cc9fb3be4a248\": container with ID starting with 1019e916358fbd31fbba59c942016069b452b1639acb70474c5cc9fb3be4a248 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.749644 4810 scope.go:117] "RemoveContainer" containerID="e563a2da27777f0410b6f8ffe10315b1b3e267c6d648ad94fcd5264ea06a8022" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.749869 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e563a2da27777f0410b6f8ffe10315b1b3e267c6d648ad94fcd5264ea06a8022\": container with ID starting with e563a2da27777f0410b6f8ffe10315b1b3e267c6d648ad94fcd5264ea06a8022 not found: ID does not exist" containerID="e563a2da27777f0410b6f8ffe10315b1b3e267c6d648ad94fcd5264ea06a8022" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.749890 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e563a2da27777f0410b6f8ffe10315b1b3e267c6d648ad94fcd5264ea06a8022"} err="failed to get container status \"e563a2da27777f0410b6f8ffe10315b1b3e267c6d648ad94fcd5264ea06a8022\": rpc error: code = NotFound desc = could not find container \"e563a2da27777f0410b6f8ffe10315b1b3e267c6d648ad94fcd5264ea06a8022\": container with ID starting with e563a2da27777f0410b6f8ffe10315b1b3e267c6d648ad94fcd5264ea06a8022 not found: ID does not exist" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.749904 4810 scope.go:117] "RemoveContainer" containerID="12eadeb9feccdef2fc44f15b7dcc115f1a4f403b53fd4722ad7d8b409e723f90" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.770606 4810 scope.go:117] "RemoveContainer" 
containerID="12eadeb9feccdef2fc44f15b7dcc115f1a4f403b53fd4722ad7d8b409e723f90" Jan 10 06:51:33 crc kubenswrapper[4810]: E0110 06:51:33.770996 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12eadeb9feccdef2fc44f15b7dcc115f1a4f403b53fd4722ad7d8b409e723f90\": container with ID starting with 12eadeb9feccdef2fc44f15b7dcc115f1a4f403b53fd4722ad7d8b409e723f90 not found: ID does not exist" containerID="12eadeb9feccdef2fc44f15b7dcc115f1a4f403b53fd4722ad7d8b409e723f90" Jan 10 06:51:33 crc kubenswrapper[4810]: I0110 06:51:33.771022 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12eadeb9feccdef2fc44f15b7dcc115f1a4f403b53fd4722ad7d8b409e723f90"} err="failed to get container status \"12eadeb9feccdef2fc44f15b7dcc115f1a4f403b53fd4722ad7d8b409e723f90\": rpc error: code = NotFound desc = could not find container \"12eadeb9feccdef2fc44f15b7dcc115f1a4f403b53fd4722ad7d8b409e723f90\": container with ID starting with 12eadeb9feccdef2fc44f15b7dcc115f1a4f403b53fd4722ad7d8b409e723f90 not found: ID does not exist" Jan 10 06:51:34 crc kubenswrapper[4810]: I0110 06:51:34.377944 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mknkg" event={"ID":"9e7dec44-02e8-4784-8062-0fc637ebb92f","Type":"ContainerStarted","Data":"5d59763c2279f33e3da025c83cf27fabc0a876120f11a8fa3f4f7192d0b4e5f4"} Jan 10 06:51:34 crc kubenswrapper[4810]: I0110 06:51:34.378333 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mknkg" Jan 10 06:51:34 crc kubenswrapper[4810]: I0110 06:51:34.381500 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mknkg" Jan 10 06:51:34 crc kubenswrapper[4810]: I0110 06:51:34.394646 4810 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/marketplace-operator-79b997595-mknkg" podStartSLOduration=2.394631585 podStartE2EDuration="2.394631585s" podCreationTimestamp="2026-01-10 06:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:51:34.39357572 +0000 UTC m=+323.009068603" watchObservedRunningTime="2026-01-10 06:51:34.394631585 +0000 UTC m=+323.010124478" Jan 10 06:51:35 crc kubenswrapper[4810]: I0110 06:51:35.409761 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 10 06:51:39 crc kubenswrapper[4810]: I0110 06:51:39.484228 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 10 06:51:39 crc kubenswrapper[4810]: I0110 06:51:39.919569 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 10 06:51:40 crc kubenswrapper[4810]: I0110 06:51:40.293715 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 10 06:51:43 crc kubenswrapper[4810]: I0110 06:51:43.967736 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mwwrl"] Jan 10 06:51:43 crc kubenswrapper[4810]: I0110 06:51:43.968487 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" podUID="7c7ec181-e310-4832-af7a-d6a5437e565d" containerName="controller-manager" containerID="cri-o://9e0aa2787e35f64267e3c0d9e4aa65d79103d74e94dc39990df49c39dd4b4d6d" gracePeriod=30 Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.074988 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"] Jan 10 
06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.075344 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" podUID="39b51457-06ae-4cf2-8b78-4f34bb908819" containerName="route-controller-manager" containerID="cri-o://4abeca10345b1cc995044f8985ef15ac6ecdcb2250b968e3b1c3da6822f9fa17" gracePeriod=30 Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.327596 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.394545 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.399405 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-client-ca\") pod \"7c7ec181-e310-4832-af7a-d6a5437e565d\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.399460 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7ec181-e310-4832-af7a-d6a5437e565d-serving-cert\") pod \"7c7ec181-e310-4832-af7a-d6a5437e565d\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.399501 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-config\") pod \"7c7ec181-e310-4832-af7a-d6a5437e565d\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.399539 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-proxy-ca-bundles\") pod \"7c7ec181-e310-4832-af7a-d6a5437e565d\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.399576 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39b51457-06ae-4cf2-8b78-4f34bb908819-config\") pod \"39b51457-06ae-4cf2-8b78-4f34bb908819\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.399639 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x5sg\" (UniqueName: \"kubernetes.io/projected/7c7ec181-e310-4832-af7a-d6a5437e565d-kube-api-access-6x5sg\") pod \"7c7ec181-e310-4832-af7a-d6a5437e565d\" (UID: \"7c7ec181-e310-4832-af7a-d6a5437e565d\") " Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.399663 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b51457-06ae-4cf2-8b78-4f34bb908819-serving-cert\") pod \"39b51457-06ae-4cf2-8b78-4f34bb908819\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.399708 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wngmk\" (UniqueName: \"kubernetes.io/projected/39b51457-06ae-4cf2-8b78-4f34bb908819-kube-api-access-wngmk\") pod \"39b51457-06ae-4cf2-8b78-4f34bb908819\" (UID: \"39b51457-06ae-4cf2-8b78-4f34bb908819\") " Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.399729 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39b51457-06ae-4cf2-8b78-4f34bb908819-client-ca\") pod \"39b51457-06ae-4cf2-8b78-4f34bb908819\" (UID: 
\"39b51457-06ae-4cf2-8b78-4f34bb908819\") " Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.400111 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-client-ca" (OuterVolumeSpecName: "client-ca") pod "7c7ec181-e310-4832-af7a-d6a5437e565d" (UID: "7c7ec181-e310-4832-af7a-d6a5437e565d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.400844 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7c7ec181-e310-4832-af7a-d6a5437e565d" (UID: "7c7ec181-e310-4832-af7a-d6a5437e565d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.400893 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-config" (OuterVolumeSpecName: "config") pod "7c7ec181-e310-4832-af7a-d6a5437e565d" (UID: "7c7ec181-e310-4832-af7a-d6a5437e565d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.400997 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b51457-06ae-4cf2-8b78-4f34bb908819-client-ca" (OuterVolumeSpecName: "client-ca") pod "39b51457-06ae-4cf2-8b78-4f34bb908819" (UID: "39b51457-06ae-4cf2-8b78-4f34bb908819"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.401093 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.401106 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.401114 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c7ec181-e310-4832-af7a-d6a5437e565d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.401320 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39b51457-06ae-4cf2-8b78-4f34bb908819-config" (OuterVolumeSpecName: "config") pod "39b51457-06ae-4cf2-8b78-4f34bb908819" (UID: "39b51457-06ae-4cf2-8b78-4f34bb908819"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.406214 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b51457-06ae-4cf2-8b78-4f34bb908819-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "39b51457-06ae-4cf2-8b78-4f34bb908819" (UID: "39b51457-06ae-4cf2-8b78-4f34bb908819"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.406245 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c7ec181-e310-4832-af7a-d6a5437e565d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7c7ec181-e310-4832-af7a-d6a5437e565d" (UID: "7c7ec181-e310-4832-af7a-d6a5437e565d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.406830 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b51457-06ae-4cf2-8b78-4f34bb908819-kube-api-access-wngmk" (OuterVolumeSpecName: "kube-api-access-wngmk") pod "39b51457-06ae-4cf2-8b78-4f34bb908819" (UID: "39b51457-06ae-4cf2-8b78-4f34bb908819"). InnerVolumeSpecName "kube-api-access-wngmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.408285 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c7ec181-e310-4832-af7a-d6a5437e565d-kube-api-access-6x5sg" (OuterVolumeSpecName: "kube-api-access-6x5sg") pod "7c7ec181-e310-4832-af7a-d6a5437e565d" (UID: "7c7ec181-e310-4832-af7a-d6a5437e565d"). InnerVolumeSpecName "kube-api-access-6x5sg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.433430 4810 generic.go:334] "Generic (PLEG): container finished" podID="39b51457-06ae-4cf2-8b78-4f34bb908819" containerID="4abeca10345b1cc995044f8985ef15ac6ecdcb2250b968e3b1c3da6822f9fa17" exitCode=0 Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.433491 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" event={"ID":"39b51457-06ae-4cf2-8b78-4f34bb908819","Type":"ContainerDied","Data":"4abeca10345b1cc995044f8985ef15ac6ecdcb2250b968e3b1c3da6822f9fa17"} Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.433553 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" event={"ID":"39b51457-06ae-4cf2-8b78-4f34bb908819","Type":"ContainerDied","Data":"0077b49623f036e56963d75ab9115b13ecfd468901f04e484b6a8d146ca5c488"} Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.433512 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.433573 4810 scope.go:117] "RemoveContainer" containerID="4abeca10345b1cc995044f8985ef15ac6ecdcb2250b968e3b1c3da6822f9fa17" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.435785 4810 generic.go:334] "Generic (PLEG): container finished" podID="7c7ec181-e310-4832-af7a-d6a5437e565d" containerID="9e0aa2787e35f64267e3c0d9e4aa65d79103d74e94dc39990df49c39dd4b4d6d" exitCode=0 Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.435812 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" event={"ID":"7c7ec181-e310-4832-af7a-d6a5437e565d","Type":"ContainerDied","Data":"9e0aa2787e35f64267e3c0d9e4aa65d79103d74e94dc39990df49c39dd4b4d6d"} Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.435829 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" event={"ID":"7c7ec181-e310-4832-af7a-d6a5437e565d","Type":"ContainerDied","Data":"f8f69d7179caa22c634085bea1f9caa3a9c324d909082a61cbd232413e8e9bff"} Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.435861 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mwwrl" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.454111 4810 scope.go:117] "RemoveContainer" containerID="4abeca10345b1cc995044f8985ef15ac6ecdcb2250b968e3b1c3da6822f9fa17" Jan 10 06:51:44 crc kubenswrapper[4810]: E0110 06:51:44.454663 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4abeca10345b1cc995044f8985ef15ac6ecdcb2250b968e3b1c3da6822f9fa17\": container with ID starting with 4abeca10345b1cc995044f8985ef15ac6ecdcb2250b968e3b1c3da6822f9fa17 not found: ID does not exist" containerID="4abeca10345b1cc995044f8985ef15ac6ecdcb2250b968e3b1c3da6822f9fa17" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.454696 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4abeca10345b1cc995044f8985ef15ac6ecdcb2250b968e3b1c3da6822f9fa17"} err="failed to get container status \"4abeca10345b1cc995044f8985ef15ac6ecdcb2250b968e3b1c3da6822f9fa17\": rpc error: code = NotFound desc = could not find container \"4abeca10345b1cc995044f8985ef15ac6ecdcb2250b968e3b1c3da6822f9fa17\": container with ID starting with 4abeca10345b1cc995044f8985ef15ac6ecdcb2250b968e3b1c3da6822f9fa17 not found: ID does not exist" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.454721 4810 scope.go:117] "RemoveContainer" containerID="9e0aa2787e35f64267e3c0d9e4aa65d79103d74e94dc39990df49c39dd4b4d6d" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.463537 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mwwrl"] Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.469919 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mwwrl"] Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.473332 4810 scope.go:117] "RemoveContainer" 
containerID="9e0aa2787e35f64267e3c0d9e4aa65d79103d74e94dc39990df49c39dd4b4d6d" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.474890 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"] Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.477385 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kt27g"] Jan 10 06:51:44 crc kubenswrapper[4810]: E0110 06:51:44.485710 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e0aa2787e35f64267e3c0d9e4aa65d79103d74e94dc39990df49c39dd4b4d6d\": container with ID starting with 9e0aa2787e35f64267e3c0d9e4aa65d79103d74e94dc39990df49c39dd4b4d6d not found: ID does not exist" containerID="9e0aa2787e35f64267e3c0d9e4aa65d79103d74e94dc39990df49c39dd4b4d6d" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.485813 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e0aa2787e35f64267e3c0d9e4aa65d79103d74e94dc39990df49c39dd4b4d6d"} err="failed to get container status \"9e0aa2787e35f64267e3c0d9e4aa65d79103d74e94dc39990df49c39dd4b4d6d\": rpc error: code = NotFound desc = could not find container \"9e0aa2787e35f64267e3c0d9e4aa65d79103d74e94dc39990df49c39dd4b4d6d\": container with ID starting with 9e0aa2787e35f64267e3c0d9e4aa65d79103d74e94dc39990df49c39dd4b4d6d not found: ID does not exist" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.502161 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c7ec181-e310-4832-af7a-d6a5437e565d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.502268 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/39b51457-06ae-4cf2-8b78-4f34bb908819-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.502326 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x5sg\" (UniqueName: \"kubernetes.io/projected/7c7ec181-e310-4832-af7a-d6a5437e565d-kube-api-access-6x5sg\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.502392 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39b51457-06ae-4cf2-8b78-4f34bb908819-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.502457 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wngmk\" (UniqueName: \"kubernetes.io/projected/39b51457-06ae-4cf2-8b78-4f34bb908819-kube-api-access-wngmk\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:44 crc kubenswrapper[4810]: I0110 06:51:44.502511 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39b51457-06ae-4cf2-8b78-4f34bb908819-client-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371175 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q"] Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371621 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" containerName="extract-content" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371632 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" containerName="extract-content" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371642 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" containerName="extract-utilities" Jan 10 
06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371648 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" containerName="extract-utilities" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371657 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c7ec181-e310-4832-af7a-d6a5437e565d" containerName="controller-manager" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371663 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c7ec181-e310-4832-af7a-d6a5437e565d" containerName="controller-manager" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371672 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" containerName="extract-content" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371677 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" containerName="extract-content" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371684 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" containerName="extract-content" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371689 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" containerName="extract-content" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371697 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b51457-06ae-4cf2-8b78-4f34bb908819" containerName="route-controller-manager" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371702 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b51457-06ae-4cf2-8b78-4f34bb908819" containerName="route-controller-manager" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371713 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a011bb8-70ef-4c9b-b5cd-642600d792b4" 
containerName="extract-utilities" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371719 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a011bb8-70ef-4c9b-b5cd-642600d792b4" containerName="extract-utilities" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371725 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" containerName="extract-content" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371731 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" containerName="extract-content" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371740 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371746 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371753 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" containerName="extract-utilities" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371759 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" containerName="extract-utilities" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371767 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371772 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371780 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" 
containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371786 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371793 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a011bb8-70ef-4c9b-b5cd-642600d792b4" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371800 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a011bb8-70ef-4c9b-b5cd-642600d792b4" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371809 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371815 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371823 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" containerName="extract-utilities" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371828 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" containerName="extract-utilities" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371834 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" containerName="extract-content" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371839 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" containerName="extract-content" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371848 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" 
containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371854 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371861 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a011bb8-70ef-4c9b-b5cd-642600d792b4" containerName="extract-content" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371866 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a011bb8-70ef-4c9b-b5cd-642600d792b4" containerName="extract-content" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371875 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" containerName="extract-utilities" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371882 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" containerName="extract-utilities" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371890 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755f846b-01e4-435d-b86c-cbe3f917aa31" containerName="marketplace-operator" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371895 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="755f846b-01e4-435d-b86c-cbe3f917aa31" containerName="marketplace-operator" Jan 10 06:51:45 crc kubenswrapper[4810]: E0110 06:51:45.371902 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" containerName="extract-utilities" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371907 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" containerName="extract-utilities" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371983 4810 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6a011bb8-70ef-4c9b-b5cd-642600d792b4" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.371993 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="505c4d65-7e70-43ba-ae57-4660f944e1dc" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.372000 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="78118904-197b-4ed9-bd6e-c02dc35f4e94" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.372008 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b646bd-b87f-45fc-8142-cef150cda498" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.372016 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda74b1c-bf42-4091-af37-e29e1494b2a6" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.372024 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="755f846b-01e4-435d-b86c-cbe3f917aa31" containerName="marketplace-operator" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.372035 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fa3125f-8e13-430b-8227-39bd4c3e011b" containerName="registry-server" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.372042 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b51457-06ae-4cf2-8b78-4f34bb908819" containerName="route-controller-manager" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.372048 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c7ec181-e310-4832-af7a-d6a5437e565d" containerName="controller-manager" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.372379 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.376762 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.376808 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.376994 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.377232 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.377522 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.377751 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.385809 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gx7wx"] Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.391554 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-699fc888d4-ggrmf"] Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.392265 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.396647 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.397252 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.397364 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.397463 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.397590 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.397686 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.403064 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q"] Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.409751 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-699fc888d4-ggrmf"] Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.415277 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb904179-007a-4468-aff7-d8de35155916-serving-cert\") pod \"route-controller-manager-85dd655fd9-8r87q\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") " 
pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.415304 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-config\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.415323 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsnl5\" (UniqueName: \"kubernetes.io/projected/cb904179-007a-4468-aff7-d8de35155916-kube-api-access-nsnl5\") pod \"route-controller-manager-85dd655fd9-8r87q\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") " pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.415347 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb904179-007a-4468-aff7-d8de35155916-config\") pod \"route-controller-manager-85dd655fd9-8r87q\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") " pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.415364 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb904179-007a-4468-aff7-d8de35155916-client-ca\") pod \"route-controller-manager-85dd655fd9-8r87q\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") " pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.415391 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d333c55-a97f-4580-8a16-c592c8a9eb8e-serving-cert\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.415406 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzb9x\" (UniqueName: \"kubernetes.io/projected/2d333c55-a97f-4580-8a16-c592c8a9eb8e-kube-api-access-zzb9x\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.415419 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-client-ca\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.415438 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-proxy-ca-bundles\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.426418 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.516063 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d333c55-a97f-4580-8a16-c592c8a9eb8e-serving-cert\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.516123 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzb9x\" (UniqueName: \"kubernetes.io/projected/2d333c55-a97f-4580-8a16-c592c8a9eb8e-kube-api-access-zzb9x\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.516144 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-client-ca\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.516166 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-proxy-ca-bundles\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.516247 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb904179-007a-4468-aff7-d8de35155916-serving-cert\") pod \"route-controller-manager-85dd655fd9-8r87q\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") " pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 
06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.516266 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-config\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.516284 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsnl5\" (UniqueName: \"kubernetes.io/projected/cb904179-007a-4468-aff7-d8de35155916-kube-api-access-nsnl5\") pod \"route-controller-manager-85dd655fd9-8r87q\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") " pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.516334 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb904179-007a-4468-aff7-d8de35155916-config\") pod \"route-controller-manager-85dd655fd9-8r87q\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") " pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.516350 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb904179-007a-4468-aff7-d8de35155916-client-ca\") pod \"route-controller-manager-85dd655fd9-8r87q\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") " pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.517289 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb904179-007a-4468-aff7-d8de35155916-client-ca\") pod \"route-controller-manager-85dd655fd9-8r87q\" 
(UID: \"cb904179-007a-4468-aff7-d8de35155916\") " pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.518925 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-client-ca\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.519363 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-config\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.519446 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-proxy-ca-bundles\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.520470 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb904179-007a-4468-aff7-d8de35155916-config\") pod \"route-controller-manager-85dd655fd9-8r87q\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") " pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.523353 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2d333c55-a97f-4580-8a16-c592c8a9eb8e-serving-cert\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.526649 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb904179-007a-4468-aff7-d8de35155916-serving-cert\") pod \"route-controller-manager-85dd655fd9-8r87q\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") " pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.533439 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzb9x\" (UniqueName: \"kubernetes.io/projected/2d333c55-a97f-4580-8a16-c592c8a9eb8e-kube-api-access-zzb9x\") pod \"controller-manager-699fc888d4-ggrmf\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.534242 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsnl5\" (UniqueName: \"kubernetes.io/projected/cb904179-007a-4468-aff7-d8de35155916-kube-api-access-nsnl5\") pod \"route-controller-manager-85dd655fd9-8r87q\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") " pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.696617 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.709353 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b51457-06ae-4cf2-8b78-4f34bb908819" path="/var/lib/kubelet/pods/39b51457-06ae-4cf2-8b78-4f34bb908819/volumes" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.710080 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c7ec181-e310-4832-af7a-d6a5437e565d" path="/var/lib/kubelet/pods/7c7ec181-e310-4832-af7a-d6a5437e565d/volumes" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.727009 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.928727 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-699fc888d4-ggrmf"] Jan 10 06:51:45 crc kubenswrapper[4810]: W0110 06:51:45.975370 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb904179_007a_4468_aff7_d8de35155916.slice/crio-f6a863661b96113dfe6985c93754e1231fbe4d9ecdcab2283d2edc2d03f347c2 WatchSource:0}: Error finding container f6a863661b96113dfe6985c93754e1231fbe4d9ecdcab2283d2edc2d03f347c2: Status 404 returned error can't find the container with id f6a863661b96113dfe6985c93754e1231fbe4d9ecdcab2283d2edc2d03f347c2 Jan 10 06:51:45 crc kubenswrapper[4810]: I0110 06:51:45.977467 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q"] Jan 10 06:51:46 crc kubenswrapper[4810]: I0110 06:51:46.450150 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" 
event={"ID":"cb904179-007a-4468-aff7-d8de35155916","Type":"ContainerStarted","Data":"b0d2d6a41b43c7f62cc9130fef3dab25d90da65fd850101ae1ca40524d226edb"} Jan 10 06:51:46 crc kubenswrapper[4810]: I0110 06:51:46.450442 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" event={"ID":"cb904179-007a-4468-aff7-d8de35155916","Type":"ContainerStarted","Data":"f6a863661b96113dfe6985c93754e1231fbe4d9ecdcab2283d2edc2d03f347c2"} Jan 10 06:51:46 crc kubenswrapper[4810]: I0110 06:51:46.451405 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:46 crc kubenswrapper[4810]: I0110 06:51:46.453623 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" event={"ID":"2d333c55-a97f-4580-8a16-c592c8a9eb8e","Type":"ContainerStarted","Data":"21665bde382cf148c5297dbaef0b714608beb729e81f8186c49bac5480243e79"} Jan 10 06:51:46 crc kubenswrapper[4810]: I0110 06:51:46.453655 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" event={"ID":"2d333c55-a97f-4580-8a16-c592c8a9eb8e","Type":"ContainerStarted","Data":"5aa5f6e3a37e47a3a83ccaba0083a6a9b67f24bc706a852950d2410e5bc68e51"} Jan 10 06:51:46 crc kubenswrapper[4810]: I0110 06:51:46.453833 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:46 crc kubenswrapper[4810]: I0110 06:51:46.457155 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:51:46 crc kubenswrapper[4810]: I0110 06:51:46.468982 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" podStartSLOduration=2.468969773 podStartE2EDuration="2.468969773s" podCreationTimestamp="2026-01-10 06:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:51:46.467526579 +0000 UTC m=+335.083019462" watchObservedRunningTime="2026-01-10 06:51:46.468969773 +0000 UTC m=+335.084462656" Jan 10 06:51:46 crc kubenswrapper[4810]: I0110 06:51:46.484542 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" podStartSLOduration=2.484530947 podStartE2EDuration="2.484530947s" podCreationTimestamp="2026-01-10 06:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:51:46.48296554 +0000 UTC m=+335.098458413" watchObservedRunningTime="2026-01-10 06:51:46.484530947 +0000 UTC m=+335.100023830" Jan 10 06:51:46 crc kubenswrapper[4810]: I0110 06:51:46.545649 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" Jan 10 06:51:48 crc kubenswrapper[4810]: I0110 06:51:48.066247 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 10 06:51:50 crc kubenswrapper[4810]: I0110 06:51:50.195437 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 10 06:51:52 crc kubenswrapper[4810]: I0110 06:51:52.402698 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 10 06:51:53 crc kubenswrapper[4810]: I0110 06:51:53.400641 4810 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 10 06:51:57 crc kubenswrapper[4810]: I0110 06:51:57.366498 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 10 06:52:04 crc kubenswrapper[4810]: I0110 06:52:04.002465 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-699fc888d4-ggrmf"] Jan 10 06:52:04 crc kubenswrapper[4810]: I0110 06:52:04.003227 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" podUID="2d333c55-a97f-4580-8a16-c592c8a9eb8e" containerName="controller-manager" containerID="cri-o://21665bde382cf148c5297dbaef0b714608beb729e81f8186c49bac5480243e79" gracePeriod=30 Jan 10 06:52:04 crc kubenswrapper[4810]: I0110 06:52:04.571062 4810 generic.go:334] "Generic (PLEG): container finished" podID="2d333c55-a97f-4580-8a16-c592c8a9eb8e" containerID="21665bde382cf148c5297dbaef0b714608beb729e81f8186c49bac5480243e79" exitCode=0 Jan 10 06:52:04 crc kubenswrapper[4810]: I0110 06:52:04.571130 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" event={"ID":"2d333c55-a97f-4580-8a16-c592c8a9eb8e","Type":"ContainerDied","Data":"21665bde382cf148c5297dbaef0b714608beb729e81f8186c49bac5480243e79"} Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.197370 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.240985 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2"] Jan 10 06:52:05 crc kubenswrapper[4810]: E0110 06:52:05.241389 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d333c55-a97f-4580-8a16-c592c8a9eb8e" containerName="controller-manager" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.241429 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d333c55-a97f-4580-8a16-c592c8a9eb8e" containerName="controller-manager" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.241645 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d333c55-a97f-4580-8a16-c592c8a9eb8e" containerName="controller-manager" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.242388 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.253384 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2"] Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.288260 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-client-ca\") pod \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.288341 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-config\") pod \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.288392 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d333c55-a97f-4580-8a16-c592c8a9eb8e-serving-cert\") pod \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.288433 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-proxy-ca-bundles\") pod \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\" (UID: \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.288455 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzb9x\" (UniqueName: \"kubernetes.io/projected/2d333c55-a97f-4580-8a16-c592c8a9eb8e-kube-api-access-zzb9x\") pod \"2d333c55-a97f-4580-8a16-c592c8a9eb8e\" (UID: 
\"2d333c55-a97f-4580-8a16-c592c8a9eb8e\") " Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.289522 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2d333c55-a97f-4580-8a16-c592c8a9eb8e" (UID: "2d333c55-a97f-4580-8a16-c592c8a9eb8e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.289597 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-config" (OuterVolumeSpecName: "config") pod "2d333c55-a97f-4580-8a16-c592c8a9eb8e" (UID: "2d333c55-a97f-4580-8a16-c592c8a9eb8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.290342 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-client-ca" (OuterVolumeSpecName: "client-ca") pod "2d333c55-a97f-4580-8a16-c592c8a9eb8e" (UID: "2d333c55-a97f-4580-8a16-c592c8a9eb8e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.296597 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d333c55-a97f-4580-8a16-c592c8a9eb8e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2d333c55-a97f-4580-8a16-c592c8a9eb8e" (UID: "2d333c55-a97f-4580-8a16-c592c8a9eb8e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.298476 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d333c55-a97f-4580-8a16-c592c8a9eb8e-kube-api-access-zzb9x" (OuterVolumeSpecName: "kube-api-access-zzb9x") pod "2d333c55-a97f-4580-8a16-c592c8a9eb8e" (UID: "2d333c55-a97f-4580-8a16-c592c8a9eb8e"). InnerVolumeSpecName "kube-api-access-zzb9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.389352 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-serving-cert\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.389399 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-config\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.389418 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26sfw\" (UniqueName: \"kubernetes.io/projected/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-kube-api-access-26sfw\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.389644 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-client-ca\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.389694 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-proxy-ca-bundles\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.389862 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.389899 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzb9x\" (UniqueName: \"kubernetes.io/projected/2d333c55-a97f-4580-8a16-c592c8a9eb8e-kube-api-access-zzb9x\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.389915 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.389927 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d333c55-a97f-4580-8a16-c592c8a9eb8e-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.389939 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2d333c55-a97f-4580-8a16-c592c8a9eb8e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.490807 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-serving-cert\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.490899 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-config\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.490958 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26sfw\" (UniqueName: \"kubernetes.io/projected/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-kube-api-access-26sfw\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.491055 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-client-ca\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.491110 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-proxy-ca-bundles\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.492706 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-client-ca\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.493005 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-proxy-ca-bundles\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.493281 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-config\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.501549 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-serving-cert\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.519422 4810 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-26sfw\" (UniqueName: \"kubernetes.io/projected/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-kube-api-access-26sfw\") pod \"controller-manager-67ff4bc6cb-4kmm2\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") " pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.578677 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" event={"ID":"2d333c55-a97f-4580-8a16-c592c8a9eb8e","Type":"ContainerDied","Data":"5aa5f6e3a37e47a3a83ccaba0083a6a9b67f24bc706a852950d2410e5bc68e51"} Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.578754 4810 scope.go:117] "RemoveContainer" containerID="21665bde382cf148c5297dbaef0b714608beb729e81f8186c49bac5480243e79" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.578755 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.578752 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-699fc888d4-ggrmf" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.624516 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-699fc888d4-ggrmf"] Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.635810 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-699fc888d4-ggrmf"] Jan 10 06:52:05 crc kubenswrapper[4810]: E0110 06:52:05.638960 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d333c55_a97f_4580_8a16_c592c8a9eb8e.slice/crio-5aa5f6e3a37e47a3a83ccaba0083a6a9b67f24bc706a852950d2410e5bc68e51\": RecentStats: unable to find data in memory cache]" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.713572 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d333c55-a97f-4580-8a16-c592c8a9eb8e" path="/var/lib/kubelet/pods/2d333c55-a97f-4580-8a16-c592c8a9eb8e/volumes" Jan 10 06:52:05 crc kubenswrapper[4810]: I0110 06:52:05.808916 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2"] Jan 10 06:52:06 crc kubenswrapper[4810]: I0110 06:52:06.601602 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" event={"ID":"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21","Type":"ContainerStarted","Data":"4dd07f4dc9eac826346e33fbf5506d9c31e0809276f196a60b733dfadf70d46b"} Jan 10 06:52:06 crc kubenswrapper[4810]: I0110 06:52:06.601656 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" 
event={"ID":"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21","Type":"ContainerStarted","Data":"34e2356251293bd488efa47b8642a965d75bef2a08794af94555a94cf3abbe62"} Jan 10 06:52:06 crc kubenswrapper[4810]: I0110 06:52:06.602053 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:06 crc kubenswrapper[4810]: I0110 06:52:06.612981 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" Jan 10 06:52:06 crc kubenswrapper[4810]: I0110 06:52:06.628373 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" podStartSLOduration=2.628359382 podStartE2EDuration="2.628359382s" podCreationTimestamp="2026-01-10 06:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:52:06.625291888 +0000 UTC m=+355.240784771" watchObservedRunningTime="2026-01-10 06:52:06.628359382 +0000 UTC m=+355.243852265" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.421665 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" podUID="d05445dd-fa70-4239-af6e-56f88234a35b" containerName="oauth-openshift" containerID="cri-o://22566abca16bb3eb6a5299acb0b73f21cfd0ce1ca090e7068ad777b1c2c37147" gracePeriod=15 Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.628030 4810 generic.go:334] "Generic (PLEG): container finished" podID="d05445dd-fa70-4239-af6e-56f88234a35b" containerID="22566abca16bb3eb6a5299acb0b73f21cfd0ce1ca090e7068ad777b1c2c37147" exitCode=0 Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.628168 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" 
event={"ID":"d05445dd-fa70-4239-af6e-56f88234a35b","Type":"ContainerDied","Data":"22566abca16bb3eb6a5299acb0b73f21cfd0ce1ca090e7068ad777b1c2c37147"} Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.954071 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.987898 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7d9c768c99-6tv85"] Jan 10 06:52:10 crc kubenswrapper[4810]: E0110 06:52:10.988145 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d05445dd-fa70-4239-af6e-56f88234a35b" containerName="oauth-openshift" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.988160 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05445dd-fa70-4239-af6e-56f88234a35b" containerName="oauth-openshift" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.988304 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="d05445dd-fa70-4239-af6e-56f88234a35b" containerName="oauth-openshift" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.988761 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.994044 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d9c768c99-6tv85"] Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.995895 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-cliconfig\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.995957 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-login\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.996016 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-router-certs\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.996052 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-provider-selection\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.996117 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-idp-0-file-data\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.996742 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.996164 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-serving-cert\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.997010 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-trusted-ca-bundle\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.997050 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-error\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.997091 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-service-ca\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.997134 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d05445dd-fa70-4239-af6e-56f88234a35b-audit-dir\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.997178 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbtr4\" (UniqueName: \"kubernetes.io/projected/d05445dd-fa70-4239-af6e-56f88234a35b-kube-api-access-rbtr4\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.997246 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-ocp-branding-template\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.997283 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-session\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.997406 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-audit-policies\") pod \"d05445dd-fa70-4239-af6e-56f88234a35b\" (UID: \"d05445dd-fa70-4239-af6e-56f88234a35b\") " Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.997541 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.997549 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.997588 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.997624 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.998062 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.998298 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb19916c-7ded-4e7a-8066-3495cb7707b9-audit-policies\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.998340 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.998407 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-user-template-error\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.998440 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-session\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.998603 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/d05445dd-fa70-4239-af6e-56f88234a35b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.998672 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-user-template-login\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.998720 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.998788 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z68kq\" (UniqueName: \"kubernetes.io/projected/eb19916c-7ded-4e7a-8066-3495cb7707b9-kube-api-access-z68kq\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.998823 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb19916c-7ded-4e7a-8066-3495cb7707b9-audit-dir\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: 
\"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.998861 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.998867 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.998968 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.999083 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.999181 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:10 crc kubenswrapper[4810]: I0110 06:52:10.999236 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.001914 4810 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d05445dd-fa70-4239-af6e-56f88234a35b-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.001964 4810 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.002262 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.002020 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.002461 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d05445dd-fa70-4239-af6e-56f88234a35b-kube-api-access-rbtr4" (OuterVolumeSpecName: "kube-api-access-rbtr4") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "kube-api-access-rbtr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.002931 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.003512 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.005659 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.006179 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.026263 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.029265 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.029560 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d05445dd-fa70-4239-af6e-56f88234a35b" (UID: "d05445dd-fa70-4239-af6e-56f88234a35b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.103493 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.103592 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.103631 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.103666 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.103704 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb19916c-7ded-4e7a-8066-3495cb7707b9-audit-policies\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.103734 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.103789 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-user-template-error\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.103822 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-session\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104328 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-user-template-login\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104378 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104415 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z68kq\" (UniqueName: \"kubernetes.io/projected/eb19916c-7ded-4e7a-8066-3495cb7707b9-kube-api-access-z68kq\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 
10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104433 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb19916c-7ded-4e7a-8066-3495cb7707b9-audit-dir\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104457 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104481 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104528 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104540 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104550 4810 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104560 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbtr4\" (UniqueName: \"kubernetes.io/projected/d05445dd-fa70-4239-af6e-56f88234a35b-kube-api-access-rbtr4\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104571 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104581 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104594 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104603 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104614 4810 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d05445dd-fa70-4239-af6e-56f88234a35b-v4-0-config-user-template-provider-selection\") on node \"crc\" 
DevicePath \"\"" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.104892 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb19916c-7ded-4e7a-8066-3495cb7707b9-audit-dir\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.107681 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.108019 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eb19916c-7ded-4e7a-8066-3495cb7707b9-audit-policies\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.108390 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.108783 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-session\") pod 
\"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.108788 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.108985 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.109370 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.110654 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-user-template-error\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.110743 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-user-template-login\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.112516 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.113109 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.113820 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eb19916c-7ded-4e7a-8066-3495cb7707b9-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: \"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.122987 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z68kq\" (UniqueName: \"kubernetes.io/projected/eb19916c-7ded-4e7a-8066-3495cb7707b9-kube-api-access-z68kq\") pod \"oauth-openshift-7d9c768c99-6tv85\" (UID: 
\"eb19916c-7ded-4e7a-8066-3495cb7707b9\") " pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.303852 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.636033 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" event={"ID":"d05445dd-fa70-4239-af6e-56f88234a35b","Type":"ContainerDied","Data":"461cd63eeb18322125d6262c98532ca983430de4ab307b4282efc83e1a68e74b"} Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.636307 4810 scope.go:117] "RemoveContainer" containerID="22566abca16bb3eb6a5299acb0b73f21cfd0ce1ca090e7068ad777b1c2c37147" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.636097 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gx7wx" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.675960 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gx7wx"] Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.678741 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gx7wx"] Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.701782 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d05445dd-fa70-4239-af6e-56f88234a35b" path="/var/lib/kubelet/pods/d05445dd-fa70-4239-af6e-56f88234a35b/volumes" Jan 10 06:52:11 crc kubenswrapper[4810]: I0110 06:52:11.755270 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d9c768c99-6tv85"] Jan 10 06:52:12 crc kubenswrapper[4810]: I0110 06:52:12.644899 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" event={"ID":"eb19916c-7ded-4e7a-8066-3495cb7707b9","Type":"ContainerStarted","Data":"6ff9e03569dbda9840e99429a61b76a03f3d0a8bef86b89d76ceb4bd5f046b26"}
Jan 10 06:52:13 crc kubenswrapper[4810]: I0110 06:52:13.654590 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" event={"ID":"eb19916c-7ded-4e7a-8066-3495cb7707b9","Type":"ContainerStarted","Data":"8c79b8ffda29224e7bac17e97407edf35cb6079e0fdf4dee76c36b622c120ee0"}
Jan 10 06:52:13 crc kubenswrapper[4810]: I0110 06:52:13.655344 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85"
Jan 10 06:52:13 crc kubenswrapper[4810]: I0110 06:52:13.664255 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85"
Jan 10 06:52:13 crc kubenswrapper[4810]: I0110 06:52:13.685422 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7d9c768c99-6tv85" podStartSLOduration=28.68540217 podStartE2EDuration="28.68540217s" podCreationTimestamp="2026-01-10 06:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:52:13.683188858 +0000 UTC m=+362.298681781" watchObservedRunningTime="2026-01-10 06:52:13.68540217 +0000 UTC m=+362.300895063"
Jan 10 06:52:20 crc kubenswrapper[4810]: I0110 06:52:20.883509 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 06:52:20 crc kubenswrapper[4810]: I0110 06:52:20.884325 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.021164 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2"]
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.021432 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" podUID="6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21" containerName="controller-manager" containerID="cri-o://4dd07f4dc9eac826346e33fbf5506d9c31e0809276f196a60b733dfadf70d46b" gracePeriod=30
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.098818 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q"]
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.099048 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" podUID="cb904179-007a-4468-aff7-d8de35155916" containerName="route-controller-manager" containerID="cri-o://b0d2d6a41b43c7f62cc9130fef3dab25d90da65fd850101ae1ca40524d226edb" gracePeriod=30
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.594780 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2"
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.601128 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q"
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.690986 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-config\") pod \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") "
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.691036 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-client-ca\") pod \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") "
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.691063 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb904179-007a-4468-aff7-d8de35155916-config\") pod \"cb904179-007a-4468-aff7-d8de35155916\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") "
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.691107 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsnl5\" (UniqueName: \"kubernetes.io/projected/cb904179-007a-4468-aff7-d8de35155916-kube-api-access-nsnl5\") pod \"cb904179-007a-4468-aff7-d8de35155916\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") "
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.691164 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26sfw\" (UniqueName: \"kubernetes.io/projected/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-kube-api-access-26sfw\") pod \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") "
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.691206 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-proxy-ca-bundles\") pod \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") "
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.691230 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb904179-007a-4468-aff7-d8de35155916-client-ca\") pod \"cb904179-007a-4468-aff7-d8de35155916\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") "
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.691253 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb904179-007a-4468-aff7-d8de35155916-serving-cert\") pod \"cb904179-007a-4468-aff7-d8de35155916\" (UID: \"cb904179-007a-4468-aff7-d8de35155916\") "
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.691274 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-serving-cert\") pod \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\" (UID: \"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21\") "
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.692174 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-config" (OuterVolumeSpecName: "config") pod "6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21" (UID: "6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.692204 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21" (UID: "6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.692408 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-client-ca" (OuterVolumeSpecName: "client-ca") pod "6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21" (UID: "6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.692635 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb904179-007a-4468-aff7-d8de35155916-config" (OuterVolumeSpecName: "config") pod "cb904179-007a-4468-aff7-d8de35155916" (UID: "cb904179-007a-4468-aff7-d8de35155916"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.692628 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb904179-007a-4468-aff7-d8de35155916-client-ca" (OuterVolumeSpecName: "client-ca") pod "cb904179-007a-4468-aff7-d8de35155916" (UID: "cb904179-007a-4468-aff7-d8de35155916"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.696817 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb904179-007a-4468-aff7-d8de35155916-kube-api-access-nsnl5" (OuterVolumeSpecName: "kube-api-access-nsnl5") pod "cb904179-007a-4468-aff7-d8de35155916" (UID: "cb904179-007a-4468-aff7-d8de35155916"). InnerVolumeSpecName "kube-api-access-nsnl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.696939 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb904179-007a-4468-aff7-d8de35155916-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cb904179-007a-4468-aff7-d8de35155916" (UID: "cb904179-007a-4468-aff7-d8de35155916"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.697112 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-kube-api-access-26sfw" (OuterVolumeSpecName: "kube-api-access-26sfw") pod "6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21" (UID: "6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21"). InnerVolumeSpecName "kube-api-access-26sfw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.697464 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21" (UID: "6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.733595 4810 generic.go:334] "Generic (PLEG): container finished" podID="cb904179-007a-4468-aff7-d8de35155916" containerID="b0d2d6a41b43c7f62cc9130fef3dab25d90da65fd850101ae1ca40524d226edb" exitCode=0
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.733693 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" event={"ID":"cb904179-007a-4468-aff7-d8de35155916","Type":"ContainerDied","Data":"b0d2d6a41b43c7f62cc9130fef3dab25d90da65fd850101ae1ca40524d226edb"}
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.733745 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q"
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.734165 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q" event={"ID":"cb904179-007a-4468-aff7-d8de35155916","Type":"ContainerDied","Data":"f6a863661b96113dfe6985c93754e1231fbe4d9ecdcab2283d2edc2d03f347c2"}
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.734208 4810 scope.go:117] "RemoveContainer" containerID="b0d2d6a41b43c7f62cc9130fef3dab25d90da65fd850101ae1ca40524d226edb"
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.735135 4810 generic.go:334] "Generic (PLEG): container finished" podID="6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21" containerID="4dd07f4dc9eac826346e33fbf5506d9c31e0809276f196a60b733dfadf70d46b" exitCode=0
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.735173 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" event={"ID":"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21","Type":"ContainerDied","Data":"4dd07f4dc9eac826346e33fbf5506d9c31e0809276f196a60b733dfadf70d46b"}
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.735187 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2"
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.735214 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2" event={"ID":"6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21","Type":"ContainerDied","Data":"34e2356251293bd488efa47b8642a965d75bef2a08794af94555a94cf3abbe62"}
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.759291 4810 scope.go:117] "RemoveContainer" containerID="b0d2d6a41b43c7f62cc9130fef3dab25d90da65fd850101ae1ca40524d226edb"
Jan 10 06:52:24 crc kubenswrapper[4810]: E0110 06:52:24.759667 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0d2d6a41b43c7f62cc9130fef3dab25d90da65fd850101ae1ca40524d226edb\": container with ID starting with b0d2d6a41b43c7f62cc9130fef3dab25d90da65fd850101ae1ca40524d226edb not found: ID does not exist" containerID="b0d2d6a41b43c7f62cc9130fef3dab25d90da65fd850101ae1ca40524d226edb"
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.759716 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0d2d6a41b43c7f62cc9130fef3dab25d90da65fd850101ae1ca40524d226edb"} err="failed to get container status \"b0d2d6a41b43c7f62cc9130fef3dab25d90da65fd850101ae1ca40524d226edb\": rpc error: code = NotFound desc = could not find container \"b0d2d6a41b43c7f62cc9130fef3dab25d90da65fd850101ae1ca40524d226edb\": container with ID starting with b0d2d6a41b43c7f62cc9130fef3dab25d90da65fd850101ae1ca40524d226edb not found: ID does not exist"
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.759753 4810 scope.go:117] "RemoveContainer" containerID="4dd07f4dc9eac826346e33fbf5506d9c31e0809276f196a60b733dfadf70d46b"
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.765055 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q"]
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.770055 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85dd655fd9-8r87q"]
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.775369 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2"]
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.783869 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67ff4bc6cb-4kmm2"]
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.784911 4810 scope.go:117] "RemoveContainer" containerID="4dd07f4dc9eac826346e33fbf5506d9c31e0809276f196a60b733dfadf70d46b"
Jan 10 06:52:24 crc kubenswrapper[4810]: E0110 06:52:24.785463 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dd07f4dc9eac826346e33fbf5506d9c31e0809276f196a60b733dfadf70d46b\": container with ID starting with 4dd07f4dc9eac826346e33fbf5506d9c31e0809276f196a60b733dfadf70d46b not found: ID does not exist" containerID="4dd07f4dc9eac826346e33fbf5506d9c31e0809276f196a60b733dfadf70d46b"
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.785559 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd07f4dc9eac826346e33fbf5506d9c31e0809276f196a60b733dfadf70d46b"} err="failed to get container status \"4dd07f4dc9eac826346e33fbf5506d9c31e0809276f196a60b733dfadf70d46b\": rpc error: code = NotFound desc = could not find container \"4dd07f4dc9eac826346e33fbf5506d9c31e0809276f196a60b733dfadf70d46b\": container with ID starting with 4dd07f4dc9eac826346e33fbf5506d9c31e0809276f196a60b733dfadf70d46b not found: ID does not exist"
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.791845 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsnl5\" (UniqueName: \"kubernetes.io/projected/cb904179-007a-4468-aff7-d8de35155916-kube-api-access-nsnl5\") on node \"crc\" DevicePath \"\""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.791881 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26sfw\" (UniqueName: \"kubernetes.io/projected/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-kube-api-access-26sfw\") on node \"crc\" DevicePath \"\""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.791894 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.791906 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb904179-007a-4468-aff7-d8de35155916-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.791920 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb904179-007a-4468-aff7-d8de35155916-client-ca\") on node \"crc\" DevicePath \"\""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.791930 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.791941 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-config\") on node \"crc\" DevicePath \"\""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.791952 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21-client-ca\") on node \"crc\" DevicePath \"\""
Jan 10 06:52:24 crc kubenswrapper[4810]: I0110 06:52:24.791963 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb904179-007a-4468-aff7-d8de35155916-config\") on node \"crc\" DevicePath \"\""
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.405044 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b69787f89-2jn84"]
Jan 10 06:52:25 crc kubenswrapper[4810]: E0110 06:52:25.405678 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21" containerName="controller-manager"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.405701 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21" containerName="controller-manager"
Jan 10 06:52:25 crc kubenswrapper[4810]: E0110 06:52:25.405735 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb904179-007a-4468-aff7-d8de35155916" containerName="route-controller-manager"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.405747 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb904179-007a-4468-aff7-d8de35155916" containerName="route-controller-manager"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.405907 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21" containerName="controller-manager"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.405931 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb904179-007a-4468-aff7-d8de35155916" containerName="route-controller-manager"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.406593 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.411046 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.412061 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"]
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.413351 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.413363 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.414521 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.414800 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.415014 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.415963 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.425752 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.426232 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.428184 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.428375 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.428681 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.428895 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.429130 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.431423 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b69787f89-2jn84"]
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.437159 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"]
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.499390 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-config\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.499477 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69qr7\" (UniqueName: \"kubernetes.io/projected/bbab7a0d-953b-40be-82be-a060e3b52812-kube-api-access-69qr7\") pod \"route-controller-manager-776567bdbb-zdgqs\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.499546 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-proxy-ca-bundles\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.499590 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr6lh\" (UniqueName: \"kubernetes.io/projected/d400d1d6-4f1c-41eb-8956-632545125572-kube-api-access-sr6lh\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.499626 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbab7a0d-953b-40be-82be-a060e3b52812-client-ca\") pod \"route-controller-manager-776567bdbb-zdgqs\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.499806 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbab7a0d-953b-40be-82be-a060e3b52812-serving-cert\") pod \"route-controller-manager-776567bdbb-zdgqs\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.499878 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab7a0d-953b-40be-82be-a060e3b52812-config\") pod \"route-controller-manager-776567bdbb-zdgqs\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.499916 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d400d1d6-4f1c-41eb-8956-632545125572-serving-cert\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.499955 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-client-ca\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.601576 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr6lh\" (UniqueName: \"kubernetes.io/projected/d400d1d6-4f1c-41eb-8956-632545125572-kube-api-access-sr6lh\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.601660 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbab7a0d-953b-40be-82be-a060e3b52812-client-ca\") pod \"route-controller-manager-776567bdbb-zdgqs\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.601758 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbab7a0d-953b-40be-82be-a060e3b52812-serving-cert\") pod \"route-controller-manager-776567bdbb-zdgqs\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.601809 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab7a0d-953b-40be-82be-a060e3b52812-config\") pod \"route-controller-manager-776567bdbb-zdgqs\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.601849 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d400d1d6-4f1c-41eb-8956-632545125572-serving-cert\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.601890 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-client-ca\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.601932 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-config\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.601968 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69qr7\" (UniqueName: \"kubernetes.io/projected/bbab7a0d-953b-40be-82be-a060e3b52812-kube-api-access-69qr7\") pod \"route-controller-manager-776567bdbb-zdgqs\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.602019 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-proxy-ca-bundles\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.603132 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbab7a0d-953b-40be-82be-a060e3b52812-client-ca\") pod \"route-controller-manager-776567bdbb-zdgqs\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.603700 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-client-ca\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.604166 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab7a0d-953b-40be-82be-a060e3b52812-config\") pod \"route-controller-manager-776567bdbb-zdgqs\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.604512 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-config\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.605189 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-proxy-ca-bundles\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.608316 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d400d1d6-4f1c-41eb-8956-632545125572-serving-cert\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.609135 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbab7a0d-953b-40be-82be-a060e3b52812-serving-cert\") pod \"route-controller-manager-776567bdbb-zdgqs\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.623318 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69qr7\" (UniqueName: \"kubernetes.io/projected/bbab7a0d-953b-40be-82be-a060e3b52812-kube-api-access-69qr7\") pod \"route-controller-manager-776567bdbb-zdgqs\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.634040 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr6lh\" (UniqueName: \"kubernetes.io/projected/d400d1d6-4f1c-41eb-8956-632545125572-kube-api-access-sr6lh\") pod \"controller-manager-6b69787f89-2jn84\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.706435 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21" path="/var/lib/kubelet/pods/6769f1e4-ddd4-4f49-8f2d-eefc93c8ce21/volumes"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.707913 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb904179-007a-4468-aff7-d8de35155916" path="/var/lib/kubelet/pods/cb904179-007a-4468-aff7-d8de35155916/volumes"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.744368 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:25 crc kubenswrapper[4810]: I0110 06:52:25.759711 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:26 crc kubenswrapper[4810]: I0110 06:52:26.047971 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"]
Jan 10 06:52:26 crc kubenswrapper[4810]: I0110 06:52:26.267722 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b69787f89-2jn84"]
Jan 10 06:52:26 crc kubenswrapper[4810]: W0110 06:52:26.273716 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd400d1d6_4f1c_41eb_8956_632545125572.slice/crio-3efa6c1f8514a7823f6fd549954bbba111d020ba6614f5f03d72e673ae86e525 WatchSource:0}: Error finding container 3efa6c1f8514a7823f6fd549954bbba111d020ba6614f5f03d72e673ae86e525: Status 404 returned error can't find the container with id 3efa6c1f8514a7823f6fd549954bbba111d020ba6614f5f03d72e673ae86e525
Jan 10 06:52:26 crc kubenswrapper[4810]: I0110 06:52:26.747984 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs" event={"ID":"bbab7a0d-953b-40be-82be-a060e3b52812","Type":"ContainerStarted","Data":"47a08827ee9d4c0397edac6f96305edf8b27c3f03d8b3185fdbf72f6f6c100ad"}
Jan 10 06:52:26 crc kubenswrapper[4810]: I0110 06:52:26.748274 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs" event={"ID":"bbab7a0d-953b-40be-82be-a060e3b52812","Type":"ContainerStarted","Data":"667367e1b00286173056f4a10d5bd7f984fd67b0b9d9f41bdd32b4857ca0859e"}
Jan 10 06:52:26 crc kubenswrapper[4810]: I0110 06:52:26.748297 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:26 crc kubenswrapper[4810]: I0110 06:52:26.749264 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84" event={"ID":"d400d1d6-4f1c-41eb-8956-632545125572","Type":"ContainerStarted","Data":"7b51fe628085c05927042c69d39338f307986e31709239f9edcca2e34aa36865"}
Jan 10 06:52:26 crc kubenswrapper[4810]: I0110 06:52:26.749288 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84" event={"ID":"d400d1d6-4f1c-41eb-8956-632545125572","Type":"ContainerStarted","Data":"3efa6c1f8514a7823f6fd549954bbba111d020ba6614f5f03d72e673ae86e525"}
Jan 10 06:52:26 crc kubenswrapper[4810]: I0110 06:52:26.749573 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:26 crc kubenswrapper[4810]: I0110 06:52:26.753823 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:26 crc kubenswrapper[4810]: I0110 06:52:26.754442 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:26 crc kubenswrapper[4810]: I0110 06:52:26.787235 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs" podStartSLOduration=2.783523168 podStartE2EDuration="2.783523168s" podCreationTimestamp="2026-01-10 06:52:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:52:26.780810083 +0000 UTC
m=+375.396302966" watchObservedRunningTime="2026-01-10 06:52:26.783523168 +0000 UTC m=+375.399016051" Jan 10 06:52:26 crc kubenswrapper[4810]: I0110 06:52:26.842923 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84" podStartSLOduration=2.842906166 podStartE2EDuration="2.842906166s" podCreationTimestamp="2026-01-10 06:52:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:52:26.839896504 +0000 UTC m=+375.455389387" watchObservedRunningTime="2026-01-10 06:52:26.842906166 +0000 UTC m=+375.458399049" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.233670 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hcfnc"] Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.235297 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.255424 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hcfnc"] Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.365077 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-registry-certificates\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.365250 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.365320 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.365365 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.365405 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-trusted-ca\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.365888 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-registry-tls\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 
crc kubenswrapper[4810]: I0110 06:52:30.365921 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-bound-sa-token\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.366135 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j9jh\" (UniqueName: \"kubernetes.io/projected/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-kube-api-access-9j9jh\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.391044 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.467233 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j9jh\" (UniqueName: \"kubernetes.io/projected/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-kube-api-access-9j9jh\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.467323 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-registry-certificates\") pod 
\"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.467369 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.467404 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.467441 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-trusted-ca\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.467487 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-registry-tls\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.467516 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-bound-sa-token\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.468236 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.468411 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-registry-certificates\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.469212 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-trusted-ca\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.476642 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-registry-tls\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.479179 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.486462 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-bound-sa-token\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.496432 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j9jh\" (UniqueName: \"kubernetes.io/projected/86cc3ad5-2a66-4d2c-b182-66a36a4b0458-kube-api-access-9j9jh\") pod \"image-registry-66df7c8f76-hcfnc\" (UID: \"86cc3ad5-2a66-4d2c-b182-66a36a4b0458\") " pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:30 crc kubenswrapper[4810]: I0110 06:52:30.571105 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:31 crc kubenswrapper[4810]: I0110 06:52:31.102842 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hcfnc"] Jan 10 06:52:31 crc kubenswrapper[4810]: W0110 06:52:31.110778 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86cc3ad5_2a66_4d2c_b182_66a36a4b0458.slice/crio-360e735954f5dbb7dd7eeb7dd54987a4c9fb24b82831e8b24954c5bef031d9a3 WatchSource:0}: Error finding container 360e735954f5dbb7dd7eeb7dd54987a4c9fb24b82831e8b24954c5bef031d9a3: Status 404 returned error can't find the container with id 360e735954f5dbb7dd7eeb7dd54987a4c9fb24b82831e8b24954c5bef031d9a3 Jan 10 06:52:31 crc kubenswrapper[4810]: I0110 06:52:31.779575 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" event={"ID":"86cc3ad5-2a66-4d2c-b182-66a36a4b0458","Type":"ContainerStarted","Data":"360e735954f5dbb7dd7eeb7dd54987a4c9fb24b82831e8b24954c5bef031d9a3"} Jan 10 06:52:32 crc kubenswrapper[4810]: I0110 06:52:32.784637 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" event={"ID":"86cc3ad5-2a66-4d2c-b182-66a36a4b0458","Type":"ContainerStarted","Data":"49a5c15c815060cecec4a466cb8c93c64bc8e8a0410aa9b007a9526e9e377690"} Jan 10 06:52:32 crc kubenswrapper[4810]: I0110 06:52:32.784783 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" Jan 10 06:52:32 crc kubenswrapper[4810]: I0110 06:52:32.806421 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc" podStartSLOduration=2.806406242 podStartE2EDuration="2.806406242s" podCreationTimestamp="2026-01-10 06:52:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:52:32.804485226 +0000 UTC m=+381.419978149" watchObservedRunningTime="2026-01-10 06:52:32.806406242 +0000 UTC m=+381.421899125" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.002340 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b69787f89-2jn84"] Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.003034 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84" podUID="d400d1d6-4f1c-41eb-8956-632545125572" containerName="controller-manager" containerID="cri-o://7b51fe628085c05927042c69d39338f307986e31709239f9edcca2e34aa36865" gracePeriod=30 Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.022937 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"] Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.023297 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs" podUID="bbab7a0d-953b-40be-82be-a060e3b52812" containerName="route-controller-manager" containerID="cri-o://47a08827ee9d4c0397edac6f96305edf8b27c3f03d8b3185fdbf72f6f6c100ad" gracePeriod=30 Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.582910 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.626060 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.677348 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbab7a0d-953b-40be-82be-a060e3b52812-client-ca\") pod \"bbab7a0d-953b-40be-82be-a060e3b52812\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.677502 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69qr7\" (UniqueName: \"kubernetes.io/projected/bbab7a0d-953b-40be-82be-a060e3b52812-kube-api-access-69qr7\") pod \"bbab7a0d-953b-40be-82be-a060e3b52812\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.677664 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab7a0d-953b-40be-82be-a060e3b52812-config\") pod \"bbab7a0d-953b-40be-82be-a060e3b52812\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.677732 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbab7a0d-953b-40be-82be-a060e3b52812-serving-cert\") pod \"bbab7a0d-953b-40be-82be-a060e3b52812\" (UID: \"bbab7a0d-953b-40be-82be-a060e3b52812\") " Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.678106 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbab7a0d-953b-40be-82be-a060e3b52812-client-ca" (OuterVolumeSpecName: "client-ca") pod "bbab7a0d-953b-40be-82be-a060e3b52812" (UID: "bbab7a0d-953b-40be-82be-a060e3b52812"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.678717 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbab7a0d-953b-40be-82be-a060e3b52812-config" (OuterVolumeSpecName: "config") pod "bbab7a0d-953b-40be-82be-a060e3b52812" (UID: "bbab7a0d-953b-40be-82be-a060e3b52812"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.685735 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbab7a0d-953b-40be-82be-a060e3b52812-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bbab7a0d-953b-40be-82be-a060e3b52812" (UID: "bbab7a0d-953b-40be-82be-a060e3b52812"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.687694 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbab7a0d-953b-40be-82be-a060e3b52812-kube-api-access-69qr7" (OuterVolumeSpecName: "kube-api-access-69qr7") pod "bbab7a0d-953b-40be-82be-a060e3b52812" (UID: "bbab7a0d-953b-40be-82be-a060e3b52812"). InnerVolumeSpecName "kube-api-access-69qr7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.779569 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-config\") pod \"d400d1d6-4f1c-41eb-8956-632545125572\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.779706 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr6lh\" (UniqueName: \"kubernetes.io/projected/d400d1d6-4f1c-41eb-8956-632545125572-kube-api-access-sr6lh\") pod \"d400d1d6-4f1c-41eb-8956-632545125572\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.779765 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-client-ca\") pod \"d400d1d6-4f1c-41eb-8956-632545125572\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.779810 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-proxy-ca-bundles\") pod \"d400d1d6-4f1c-41eb-8956-632545125572\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.779895 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d400d1d6-4f1c-41eb-8956-632545125572-serving-cert\") pod \"d400d1d6-4f1c-41eb-8956-632545125572\" (UID: \"d400d1d6-4f1c-41eb-8956-632545125572\") " Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.780494 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bbab7a0d-953b-40be-82be-a060e3b52812-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.780548 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbab7a0d-953b-40be-82be-a060e3b52812-client-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.780579 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69qr7\" (UniqueName: \"kubernetes.io/projected/bbab7a0d-953b-40be-82be-a060e3b52812-kube-api-access-69qr7\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.780607 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbab7a0d-953b-40be-82be-a060e3b52812-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.780713 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-client-ca" (OuterVolumeSpecName: "client-ca") pod "d400d1d6-4f1c-41eb-8956-632545125572" (UID: "d400d1d6-4f1c-41eb-8956-632545125572"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.780957 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d400d1d6-4f1c-41eb-8956-632545125572" (UID: "d400d1d6-4f1c-41eb-8956-632545125572"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.781139 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-config" (OuterVolumeSpecName: "config") pod "d400d1d6-4f1c-41eb-8956-632545125572" (UID: "d400d1d6-4f1c-41eb-8956-632545125572"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.782903 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d400d1d6-4f1c-41eb-8956-632545125572-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d400d1d6-4f1c-41eb-8956-632545125572" (UID: "d400d1d6-4f1c-41eb-8956-632545125572"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.784062 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d400d1d6-4f1c-41eb-8956-632545125572-kube-api-access-sr6lh" (OuterVolumeSpecName: "kube-api-access-sr6lh") pod "d400d1d6-4f1c-41eb-8956-632545125572" (UID: "d400d1d6-4f1c-41eb-8956-632545125572"). InnerVolumeSpecName "kube-api-access-sr6lh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.881677 4810 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d400d1d6-4f1c-41eb-8956-632545125572-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.881722 4810 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.881736 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr6lh\" (UniqueName: \"kubernetes.io/projected/d400d1d6-4f1c-41eb-8956-632545125572-kube-api-access-sr6lh\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.881751 4810 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-client-ca\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.881762 4810 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d400d1d6-4f1c-41eb-8956-632545125572-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.890226 4810 generic.go:334] "Generic (PLEG): container finished" podID="bbab7a0d-953b-40be-82be-a060e3b52812" containerID="47a08827ee9d4c0397edac6f96305edf8b27c3f03d8b3185fdbf72f6f6c100ad" exitCode=0 Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.890314 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs" event={"ID":"bbab7a0d-953b-40be-82be-a060e3b52812","Type":"ContainerDied","Data":"47a08827ee9d4c0397edac6f96305edf8b27c3f03d8b3185fdbf72f6f6c100ad"} Jan 10 
06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.890350 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs" event={"ID":"bbab7a0d-953b-40be-82be-a060e3b52812","Type":"ContainerDied","Data":"667367e1b00286173056f4a10d5bd7f984fd67b0b9d9f41bdd32b4857ca0859e"}
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.890372 4810 scope.go:117] "RemoveContainer" containerID="47a08827ee9d4c0397edac6f96305edf8b27c3f03d8b3185fdbf72f6f6c100ad"
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.890375 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.892532 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84"
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.892487 4810 generic.go:334] "Generic (PLEG): container finished" podID="d400d1d6-4f1c-41eb-8956-632545125572" containerID="7b51fe628085c05927042c69d39338f307986e31709239f9edcca2e34aa36865" exitCode=0
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.892601 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84" event={"ID":"d400d1d6-4f1c-41eb-8956-632545125572","Type":"ContainerDied","Data":"7b51fe628085c05927042c69d39338f307986e31709239f9edcca2e34aa36865"}
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.892637 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b69787f89-2jn84" event={"ID":"d400d1d6-4f1c-41eb-8956-632545125572","Type":"ContainerDied","Data":"3efa6c1f8514a7823f6fd549954bbba111d020ba6614f5f03d72e673ae86e525"}
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.917936 4810 scope.go:117] "RemoveContainer" containerID="47a08827ee9d4c0397edac6f96305edf8b27c3f03d8b3185fdbf72f6f6c100ad"
Jan 10 06:52:44 crc kubenswrapper[4810]: E0110 06:52:44.918465 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47a08827ee9d4c0397edac6f96305edf8b27c3f03d8b3185fdbf72f6f6c100ad\": container with ID starting with 47a08827ee9d4c0397edac6f96305edf8b27c3f03d8b3185fdbf72f6f6c100ad not found: ID does not exist" containerID="47a08827ee9d4c0397edac6f96305edf8b27c3f03d8b3185fdbf72f6f6c100ad"
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.918526 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47a08827ee9d4c0397edac6f96305edf8b27c3f03d8b3185fdbf72f6f6c100ad"} err="failed to get container status \"47a08827ee9d4c0397edac6f96305edf8b27c3f03d8b3185fdbf72f6f6c100ad\": rpc error: code = NotFound desc = could not find container \"47a08827ee9d4c0397edac6f96305edf8b27c3f03d8b3185fdbf72f6f6c100ad\": container with ID starting with 47a08827ee9d4c0397edac6f96305edf8b27c3f03d8b3185fdbf72f6f6c100ad not found: ID does not exist"
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.918555 4810 scope.go:117] "RemoveContainer" containerID="7b51fe628085c05927042c69d39338f307986e31709239f9edcca2e34aa36865"
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.922237 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b69787f89-2jn84"]
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.941649 4810 scope.go:117] "RemoveContainer" containerID="7b51fe628085c05927042c69d39338f307986e31709239f9edcca2e34aa36865"
Jan 10 06:52:44 crc kubenswrapper[4810]: E0110 06:52:44.942168 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b51fe628085c05927042c69d39338f307986e31709239f9edcca2e34aa36865\": container with ID starting with 7b51fe628085c05927042c69d39338f307986e31709239f9edcca2e34aa36865 not found: ID does not exist" containerID="7b51fe628085c05927042c69d39338f307986e31709239f9edcca2e34aa36865"
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.942261 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b51fe628085c05927042c69d39338f307986e31709239f9edcca2e34aa36865"} err="failed to get container status \"7b51fe628085c05927042c69d39338f307986e31709239f9edcca2e34aa36865\": rpc error: code = NotFound desc = could not find container \"7b51fe628085c05927042c69d39338f307986e31709239f9edcca2e34aa36865\": container with ID starting with 7b51fe628085c05927042c69d39338f307986e31709239f9edcca2e34aa36865 not found: ID does not exist"
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.949371 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b69787f89-2jn84"]
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.955263 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"]
Jan 10 06:52:44 crc kubenswrapper[4810]: I0110 06:52:44.962798 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776567bdbb-zdgqs"]
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.414883 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"]
Jan 10 06:52:45 crc kubenswrapper[4810]: E0110 06:52:45.415171 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbab7a0d-953b-40be-82be-a060e3b52812" containerName="route-controller-manager"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.415237 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbab7a0d-953b-40be-82be-a060e3b52812" containerName="route-controller-manager"
Jan 10 06:52:45 crc kubenswrapper[4810]: E0110 06:52:45.415263 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d400d1d6-4f1c-41eb-8956-632545125572" containerName="controller-manager"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.415271 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d400d1d6-4f1c-41eb-8956-632545125572" containerName="controller-manager"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.415390 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="d400d1d6-4f1c-41eb-8956-632545125572" containerName="controller-manager"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.415420 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbab7a0d-953b-40be-82be-a060e3b52812" containerName="route-controller-manager"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.415926 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.420869 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-968fc58dd-jxb6j"]
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.421603 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.422225 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.422254 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.422560 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.422873 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.422926 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.423025 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.425246 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.425614 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.425668 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.427409 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.427429 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.427581 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.432019 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"]
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.439140 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.440017 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-968fc58dd-jxb6j"]
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.497995 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7jwl\" (UniqueName: \"kubernetes.io/projected/84388251-430b-48a1-841f-fbf93e379d3b-kube-api-access-r7jwl\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.498075 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aac6c8e6-4464-495a-b67e-bd9af8f6ff41-config\") pod \"route-controller-manager-7f9f6bc6d4-cxqfw\" (UID: \"aac6c8e6-4464-495a-b67e-bd9af8f6ff41\") " pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.498131 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aac6c8e6-4464-495a-b67e-bd9af8f6ff41-serving-cert\") pod \"route-controller-manager-7f9f6bc6d4-cxqfw\" (UID: \"aac6c8e6-4464-495a-b67e-bd9af8f6ff41\") " pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.498177 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84388251-430b-48a1-841f-fbf93e379d3b-client-ca\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.498246 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aac6c8e6-4464-495a-b67e-bd9af8f6ff41-client-ca\") pod \"route-controller-manager-7f9f6bc6d4-cxqfw\" (UID: \"aac6c8e6-4464-495a-b67e-bd9af8f6ff41\") " pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.498316 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84388251-430b-48a1-841f-fbf93e379d3b-config\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.498346 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84388251-430b-48a1-841f-fbf93e379d3b-proxy-ca-bundles\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.498381 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84388251-430b-48a1-841f-fbf93e379d3b-serving-cert\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.498434 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6h65\" (UniqueName: \"kubernetes.io/projected/aac6c8e6-4464-495a-b67e-bd9af8f6ff41-kube-api-access-v6h65\") pod \"route-controller-manager-7f9f6bc6d4-cxqfw\" (UID: \"aac6c8e6-4464-495a-b67e-bd9af8f6ff41\") " pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.599989 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6h65\" (UniqueName: \"kubernetes.io/projected/aac6c8e6-4464-495a-b67e-bd9af8f6ff41-kube-api-access-v6h65\") pod \"route-controller-manager-7f9f6bc6d4-cxqfw\" (UID: \"aac6c8e6-4464-495a-b67e-bd9af8f6ff41\") " pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.600052 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7jwl\" (UniqueName: \"kubernetes.io/projected/84388251-430b-48a1-841f-fbf93e379d3b-kube-api-access-r7jwl\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.600089 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aac6c8e6-4464-495a-b67e-bd9af8f6ff41-config\") pod \"route-controller-manager-7f9f6bc6d4-cxqfw\" (UID: \"aac6c8e6-4464-495a-b67e-bd9af8f6ff41\") " pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.600137 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aac6c8e6-4464-495a-b67e-bd9af8f6ff41-serving-cert\") pod \"route-controller-manager-7f9f6bc6d4-cxqfw\" (UID: \"aac6c8e6-4464-495a-b67e-bd9af8f6ff41\") " pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.600179 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84388251-430b-48a1-841f-fbf93e379d3b-client-ca\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.600245 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aac6c8e6-4464-495a-b67e-bd9af8f6ff41-client-ca\") pod \"route-controller-manager-7f9f6bc6d4-cxqfw\" (UID: \"aac6c8e6-4464-495a-b67e-bd9af8f6ff41\") " pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.600282 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84388251-430b-48a1-841f-fbf93e379d3b-proxy-ca-bundles\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.600306 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84388251-430b-48a1-841f-fbf93e379d3b-config\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.600332 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84388251-430b-48a1-841f-fbf93e379d3b-serving-cert\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.601465 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aac6c8e6-4464-495a-b67e-bd9af8f6ff41-client-ca\") pod \"route-controller-manager-7f9f6bc6d4-cxqfw\" (UID: \"aac6c8e6-4464-495a-b67e-bd9af8f6ff41\") " pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.602015 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84388251-430b-48a1-841f-fbf93e379d3b-config\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.602518 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84388251-430b-48a1-841f-fbf93e379d3b-proxy-ca-bundles\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.602690 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aac6c8e6-4464-495a-b67e-bd9af8f6ff41-config\") pod \"route-controller-manager-7f9f6bc6d4-cxqfw\" (UID: \"aac6c8e6-4464-495a-b67e-bd9af8f6ff41\") " pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.602969 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84388251-430b-48a1-841f-fbf93e379d3b-client-ca\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.603905 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84388251-430b-48a1-841f-fbf93e379d3b-serving-cert\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.604968 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aac6c8e6-4464-495a-b67e-bd9af8f6ff41-serving-cert\") pod \"route-controller-manager-7f9f6bc6d4-cxqfw\" (UID: \"aac6c8e6-4464-495a-b67e-bd9af8f6ff41\") " pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.617820 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6h65\" (UniqueName: \"kubernetes.io/projected/aac6c8e6-4464-495a-b67e-bd9af8f6ff41-kube-api-access-v6h65\") pod \"route-controller-manager-7f9f6bc6d4-cxqfw\" (UID: \"aac6c8e6-4464-495a-b67e-bd9af8f6ff41\") " pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.633634 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7jwl\" (UniqueName: \"kubernetes.io/projected/84388251-430b-48a1-841f-fbf93e379d3b-kube-api-access-r7jwl\") pod \"controller-manager-968fc58dd-jxb6j\" (UID: \"84388251-430b-48a1-841f-fbf93e379d3b\") " pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.699626 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbab7a0d-953b-40be-82be-a060e3b52812" path="/var/lib/kubelet/pods/bbab7a0d-953b-40be-82be-a060e3b52812/volumes"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.700392 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d400d1d6-4f1c-41eb-8956-632545125572" path="/var/lib/kubelet/pods/d400d1d6-4f1c-41eb-8956-632545125572/volumes"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.743132 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:45 crc kubenswrapper[4810]: I0110 06:52:45.756118 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:46 crc kubenswrapper[4810]: I0110 06:52:46.152247 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"]
Jan 10 06:52:46 crc kubenswrapper[4810]: W0110 06:52:46.160316 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaac6c8e6_4464_495a_b67e_bd9af8f6ff41.slice/crio-b1159959e0829923df102c417683f161aa46a8d07635d785e32b8894bee09981 WatchSource:0}: Error finding container b1159959e0829923df102c417683f161aa46a8d07635d785e32b8894bee09981: Status 404 returned error can't find the container with id b1159959e0829923df102c417683f161aa46a8d07635d785e32b8894bee09981
Jan 10 06:52:46 crc kubenswrapper[4810]: I0110 06:52:46.232245 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-968fc58dd-jxb6j"]
Jan 10 06:52:46 crc kubenswrapper[4810]: I0110 06:52:46.906443 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw" event={"ID":"aac6c8e6-4464-495a-b67e-bd9af8f6ff41","Type":"ContainerStarted","Data":"411819245b1c860a77daf211f5764b761105b5413e65fcfddf23488bd11ac419"}
Jan 10 06:52:46 crc kubenswrapper[4810]: I0110 06:52:46.906719 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:46 crc kubenswrapper[4810]: I0110 06:52:46.906736 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw" event={"ID":"aac6c8e6-4464-495a-b67e-bd9af8f6ff41","Type":"ContainerStarted","Data":"b1159959e0829923df102c417683f161aa46a8d07635d785e32b8894bee09981"}
Jan 10 06:52:46 crc kubenswrapper[4810]: I0110 06:52:46.908487 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j" event={"ID":"84388251-430b-48a1-841f-fbf93e379d3b","Type":"ContainerStarted","Data":"b3dd7e59f7878d90d9a1d64d24326d8a23ed76a0230dbf75e20f900554859913"}
Jan 10 06:52:46 crc kubenswrapper[4810]: I0110 06:52:46.908531 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j" event={"ID":"84388251-430b-48a1-841f-fbf93e379d3b","Type":"ContainerStarted","Data":"4d746235336fb1b65c05518702aa8cdf1016514ac14dfed1b899a5ece2e6acc6"}
Jan 10 06:52:46 crc kubenswrapper[4810]: I0110 06:52:46.910235 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:46 crc kubenswrapper[4810]: I0110 06:52:46.914382 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw"
Jan 10 06:52:46 crc kubenswrapper[4810]: I0110 06:52:46.916748 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j"
Jan 10 06:52:46 crc kubenswrapper[4810]: I0110 06:52:46.973747 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f9f6bc6d4-cxqfw" podStartSLOduration=2.973729182 podStartE2EDuration="2.973729182s" podCreationTimestamp="2026-01-10 06:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:52:46.944511776 +0000 UTC m=+395.560004669" watchObservedRunningTime="2026-01-10 06:52:46.973729182 +0000 UTC m=+395.589222065"
Jan 10 06:52:47 crc kubenswrapper[4810]: I0110 06:52:47.054788 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-968fc58dd-jxb6j" podStartSLOduration=3.054773848 podStartE2EDuration="3.054773848s" podCreationTimestamp="2026-01-10 06:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:52:46.976660783 +0000 UTC m=+395.592153666" watchObservedRunningTime="2026-01-10 06:52:47.054773848 +0000 UTC m=+395.670266731"
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.476576 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6cvnc"]
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.479070 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cvnc"
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.481516 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.489732 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cvnc"]
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.577110 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783ca84a-455c-4f14-8b77-4a9689c46026-catalog-content\") pod \"redhat-marketplace-6cvnc\" (UID: \"783ca84a-455c-4f14-8b77-4a9689c46026\") " pod="openshift-marketplace/redhat-marketplace-6cvnc"
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.577236 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgmfs\" (UniqueName: \"kubernetes.io/projected/783ca84a-455c-4f14-8b77-4a9689c46026-kube-api-access-fgmfs\") pod \"redhat-marketplace-6cvnc\" (UID: \"783ca84a-455c-4f14-8b77-4a9689c46026\") " pod="openshift-marketplace/redhat-marketplace-6cvnc"
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.577296 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783ca84a-455c-4f14-8b77-4a9689c46026-utilities\") pod \"redhat-marketplace-6cvnc\" (UID: \"783ca84a-455c-4f14-8b77-4a9689c46026\") " pod="openshift-marketplace/redhat-marketplace-6cvnc"
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.678801 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783ca84a-455c-4f14-8b77-4a9689c46026-catalog-content\") pod \"redhat-marketplace-6cvnc\" (UID: \"783ca84a-455c-4f14-8b77-4a9689c46026\") " pod="openshift-marketplace/redhat-marketplace-6cvnc"
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.679172 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgmfs\" (UniqueName: \"kubernetes.io/projected/783ca84a-455c-4f14-8b77-4a9689c46026-kube-api-access-fgmfs\") pod \"redhat-marketplace-6cvnc\" (UID: \"783ca84a-455c-4f14-8b77-4a9689c46026\") " pod="openshift-marketplace/redhat-marketplace-6cvnc"
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.679241 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783ca84a-455c-4f14-8b77-4a9689c46026-utilities\") pod \"redhat-marketplace-6cvnc\" (UID: \"783ca84a-455c-4f14-8b77-4a9689c46026\") " pod="openshift-marketplace/redhat-marketplace-6cvnc"
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.679741 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783ca84a-455c-4f14-8b77-4a9689c46026-utilities\") pod \"redhat-marketplace-6cvnc\" (UID: \"783ca84a-455c-4f14-8b77-4a9689c46026\") " pod="openshift-marketplace/redhat-marketplace-6cvnc"
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.679774 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783ca84a-455c-4f14-8b77-4a9689c46026-catalog-content\") pod \"redhat-marketplace-6cvnc\" (UID: \"783ca84a-455c-4f14-8b77-4a9689c46026\") " pod="openshift-marketplace/redhat-marketplace-6cvnc"
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.720684 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgmfs\" (UniqueName: \"kubernetes.io/projected/783ca84a-455c-4f14-8b77-4a9689c46026-kube-api-access-fgmfs\") pod \"redhat-marketplace-6cvnc\" (UID: \"783ca84a-455c-4f14-8b77-4a9689c46026\") " pod="openshift-marketplace/redhat-marketplace-6cvnc"
Jan 10 06:52:49 crc kubenswrapper[4810]: I0110 06:52:49.811348 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6cvnc"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:50.577999 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-hcfnc"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:50.651256 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nqtg7"]
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:50.882849 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:50.883159 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.272793 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-db5l6"]
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.274991 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-db5l6"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.277449 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.283263 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-db5l6"]
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.429137 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qnfp\" (UniqueName: \"kubernetes.io/projected/678a3ed9-5d4e-493f-9be6-fad386a36363-kube-api-access-4qnfp\") pod \"certified-operators-db5l6\" (UID: \"678a3ed9-5d4e-493f-9be6-fad386a36363\") " pod="openshift-marketplace/certified-operators-db5l6"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.429184 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678a3ed9-5d4e-493f-9be6-fad386a36363-catalog-content\") pod \"certified-operators-db5l6\" (UID: \"678a3ed9-5d4e-493f-9be6-fad386a36363\") " pod="openshift-marketplace/certified-operators-db5l6"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.429234 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678a3ed9-5d4e-493f-9be6-fad386a36363-utilities\") pod \"certified-operators-db5l6\" (UID: \"678a3ed9-5d4e-493f-9be6-fad386a36363\") " pod="openshift-marketplace/certified-operators-db5l6"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.531851 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qnfp\" (UniqueName: \"kubernetes.io/projected/678a3ed9-5d4e-493f-9be6-fad386a36363-kube-api-access-4qnfp\") pod \"certified-operators-db5l6\" (UID: \"678a3ed9-5d4e-493f-9be6-fad386a36363\") " pod="openshift-marketplace/certified-operators-db5l6"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.531908 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678a3ed9-5d4e-493f-9be6-fad386a36363-catalog-content\") pod \"certified-operators-db5l6\" (UID: \"678a3ed9-5d4e-493f-9be6-fad386a36363\") " pod="openshift-marketplace/certified-operators-db5l6"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.531933 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678a3ed9-5d4e-493f-9be6-fad386a36363-utilities\") pod \"certified-operators-db5l6\" (UID: \"678a3ed9-5d4e-493f-9be6-fad386a36363\") " pod="openshift-marketplace/certified-operators-db5l6"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.533697 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678a3ed9-5d4e-493f-9be6-fad386a36363-utilities\") pod \"certified-operators-db5l6\" (UID: \"678a3ed9-5d4e-493f-9be6-fad386a36363\") " pod="openshift-marketplace/certified-operators-db5l6"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.535744 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678a3ed9-5d4e-493f-9be6-fad386a36363-catalog-content\") pod \"certified-operators-db5l6\" (UID: \"678a3ed9-5d4e-493f-9be6-fad386a36363\") " pod="openshift-marketplace/certified-operators-db5l6"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.565072 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qnfp\" (UniqueName: \"kubernetes.io/projected/678a3ed9-5d4e-493f-9be6-fad386a36363-kube-api-access-4qnfp\") pod \"certified-operators-db5l6\" (UID: \"678a3ed9-5d4e-493f-9be6-fad386a36363\") " pod="openshift-marketplace/certified-operators-db5l6"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.632613 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-db5l6"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.839822 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6cvnc"]
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.868351 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2n6cp"]
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.869639 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2n6cp"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.872020 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.883511 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2n6cp"]
Jan 10 06:52:51 crc kubenswrapper[4810]: I0110 06:52:51.949461 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cvnc" event={"ID":"783ca84a-455c-4f14-8b77-4a9689c46026","Type":"ContainerStarted","Data":"1fdfbd80de97116fcca4dcc1a80139b34133904a36ddac00c708e098117b8f41"}
Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.038247 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a578a2-8b7d-4d6d-94bd-a258853f79a2-utilities\") pod \"redhat-operators-2n6cp\" (UID: \"61a578a2-8b7d-4d6d-94bd-a258853f79a2\") " pod="openshift-marketplace/redhat-operators-2n6cp"
Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.038382 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a578a2-8b7d-4d6d-94bd-a258853f79a2-catalog-content\") pod \"redhat-operators-2n6cp\" (UID: \"61a578a2-8b7d-4d6d-94bd-a258853f79a2\") " pod="openshift-marketplace/redhat-operators-2n6cp"
Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.038440 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69kww\" (UniqueName: \"kubernetes.io/projected/61a578a2-8b7d-4d6d-94bd-a258853f79a2-kube-api-access-69kww\") pod \"redhat-operators-2n6cp\" (UID: \"61a578a2-8b7d-4d6d-94bd-a258853f79a2\") " pod="openshift-marketplace/redhat-operators-2n6cp"
Jan 10
06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.070804 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-db5l6"] Jan 10 06:52:52 crc kubenswrapper[4810]: W0110 06:52:52.100119 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod678a3ed9_5d4e_493f_9be6_fad386a36363.slice/crio-810f6c7af9c2c4935bfa8a427eb203e03228f3c969687f3c8b967f65a2a68076 WatchSource:0}: Error finding container 810f6c7af9c2c4935bfa8a427eb203e03228f3c969687f3c8b967f65a2a68076: Status 404 returned error can't find the container with id 810f6c7af9c2c4935bfa8a427eb203e03228f3c969687f3c8b967f65a2a68076 Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.140806 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a578a2-8b7d-4d6d-94bd-a258853f79a2-utilities\") pod \"redhat-operators-2n6cp\" (UID: \"61a578a2-8b7d-4d6d-94bd-a258853f79a2\") " pod="openshift-marketplace/redhat-operators-2n6cp" Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.140872 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a578a2-8b7d-4d6d-94bd-a258853f79a2-catalog-content\") pod \"redhat-operators-2n6cp\" (UID: \"61a578a2-8b7d-4d6d-94bd-a258853f79a2\") " pod="openshift-marketplace/redhat-operators-2n6cp" Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.140898 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69kww\" (UniqueName: \"kubernetes.io/projected/61a578a2-8b7d-4d6d-94bd-a258853f79a2-kube-api-access-69kww\") pod \"redhat-operators-2n6cp\" (UID: \"61a578a2-8b7d-4d6d-94bd-a258853f79a2\") " pod="openshift-marketplace/redhat-operators-2n6cp" Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.143453 4810 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a578a2-8b7d-4d6d-94bd-a258853f79a2-utilities\") pod \"redhat-operators-2n6cp\" (UID: \"61a578a2-8b7d-4d6d-94bd-a258853f79a2\") " pod="openshift-marketplace/redhat-operators-2n6cp" Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.143927 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a578a2-8b7d-4d6d-94bd-a258853f79a2-catalog-content\") pod \"redhat-operators-2n6cp\" (UID: \"61a578a2-8b7d-4d6d-94bd-a258853f79a2\") " pod="openshift-marketplace/redhat-operators-2n6cp" Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.173824 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69kww\" (UniqueName: \"kubernetes.io/projected/61a578a2-8b7d-4d6d-94bd-a258853f79a2-kube-api-access-69kww\") pod \"redhat-operators-2n6cp\" (UID: \"61a578a2-8b7d-4d6d-94bd-a258853f79a2\") " pod="openshift-marketplace/redhat-operators-2n6cp" Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.195604 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2n6cp" Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.635222 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2n6cp"] Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.957620 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a578a2-8b7d-4d6d-94bd-a258853f79a2" containerID="93db54bece922e07f919bce945eb29c002e3a4982d14da13823408b191817f27" exitCode=0 Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.957707 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2n6cp" event={"ID":"61a578a2-8b7d-4d6d-94bd-a258853f79a2","Type":"ContainerDied","Data":"93db54bece922e07f919bce945eb29c002e3a4982d14da13823408b191817f27"} Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.958140 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2n6cp" event={"ID":"61a578a2-8b7d-4d6d-94bd-a258853f79a2","Type":"ContainerStarted","Data":"49a50d24b70331a8fe89b51171f7734a1eacbe929bcf4a32a354989e0a47a7b6"} Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.960471 4810 generic.go:334] "Generic (PLEG): container finished" podID="783ca84a-455c-4f14-8b77-4a9689c46026" containerID="0752df39f393354f35235b5b5b458f041e84b0f7ce18103bb7fad1dbfdfc1d1a" exitCode=0 Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.960605 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cvnc" event={"ID":"783ca84a-455c-4f14-8b77-4a9689c46026","Type":"ContainerDied","Data":"0752df39f393354f35235b5b5b458f041e84b0f7ce18103bb7fad1dbfdfc1d1a"} Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.963151 4810 generic.go:334] "Generic (PLEG): container finished" podID="678a3ed9-5d4e-493f-9be6-fad386a36363" containerID="1c672311b5235fa74fef04b9e201c96a354d39caf3ba31a730e9937787cd6251" exitCode=0 Jan 10 06:52:52 crc 
kubenswrapper[4810]: I0110 06:52:52.963260 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-db5l6" event={"ID":"678a3ed9-5d4e-493f-9be6-fad386a36363","Type":"ContainerDied","Data":"1c672311b5235fa74fef04b9e201c96a354d39caf3ba31a730e9937787cd6251"} Jan 10 06:52:52 crc kubenswrapper[4810]: I0110 06:52:52.963329 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-db5l6" event={"ID":"678a3ed9-5d4e-493f-9be6-fad386a36363","Type":"ContainerStarted","Data":"810f6c7af9c2c4935bfa8a427eb203e03228f3c969687f3c8b967f65a2a68076"} Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.671073 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fblgv"] Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.672726 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.675567 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.678998 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fblgv"] Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.770867 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhmk\" (UniqueName: \"kubernetes.io/projected/98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92-kube-api-access-rnhmk\") pod \"community-operators-fblgv\" (UID: \"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92\") " pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.770931 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92-utilities\") pod \"community-operators-fblgv\" (UID: \"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92\") " pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.770976 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92-catalog-content\") pod \"community-operators-fblgv\" (UID: \"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92\") " pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.871978 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnhmk\" (UniqueName: \"kubernetes.io/projected/98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92-kube-api-access-rnhmk\") pod \"community-operators-fblgv\" (UID: \"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92\") " pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.872055 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92-utilities\") pod \"community-operators-fblgv\" (UID: \"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92\") " pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.872104 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92-catalog-content\") pod \"community-operators-fblgv\" (UID: \"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92\") " pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.872931 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92-catalog-content\") pod \"community-operators-fblgv\" (UID: \"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92\") " pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.872971 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92-utilities\") pod \"community-operators-fblgv\" (UID: \"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92\") " pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.905276 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnhmk\" (UniqueName: \"kubernetes.io/projected/98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92-kube-api-access-rnhmk\") pod \"community-operators-fblgv\" (UID: \"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92\") " pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.969800 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-db5l6" event={"ID":"678a3ed9-5d4e-493f-9be6-fad386a36363","Type":"ContainerStarted","Data":"ace26cf7c70af39eef90c9b9a7d347c9f47eaff8be36e06a2de6c6cc2021ccf6"} Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.972219 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2n6cp" event={"ID":"61a578a2-8b7d-4d6d-94bd-a258853f79a2","Type":"ContainerStarted","Data":"2f509b35a8380bcd5af137048b398025901757e035c95e544edeccc9aba8ecfe"} Jan 10 06:52:53 crc kubenswrapper[4810]: I0110 06:52:53.974234 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cvnc" event={"ID":"783ca84a-455c-4f14-8b77-4a9689c46026","Type":"ContainerStarted","Data":"5d1c7b29f545d44a620805e530e7d769e6ada3799e0961ba3eda7b58a2984b9d"} Jan 10 06:52:54 crc 
kubenswrapper[4810]: I0110 06:52:54.098254 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:52:54 crc kubenswrapper[4810]: I0110 06:52:54.512381 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fblgv"] Jan 10 06:52:54 crc kubenswrapper[4810]: I0110 06:52:54.986133 4810 generic.go:334] "Generic (PLEG): container finished" podID="678a3ed9-5d4e-493f-9be6-fad386a36363" containerID="ace26cf7c70af39eef90c9b9a7d347c9f47eaff8be36e06a2de6c6cc2021ccf6" exitCode=0 Jan 10 06:52:54 crc kubenswrapper[4810]: I0110 06:52:54.986223 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-db5l6" event={"ID":"678a3ed9-5d4e-493f-9be6-fad386a36363","Type":"ContainerDied","Data":"ace26cf7c70af39eef90c9b9a7d347c9f47eaff8be36e06a2de6c6cc2021ccf6"} Jan 10 06:52:54 crc kubenswrapper[4810]: I0110 06:52:54.989765 4810 generic.go:334] "Generic (PLEG): container finished" podID="98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92" containerID="98249bc6d50d2537c68262c6e7fc0d86e42e355be60806fec98a08097eb949a8" exitCode=0 Jan 10 06:52:54 crc kubenswrapper[4810]: I0110 06:52:54.989908 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fblgv" event={"ID":"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92","Type":"ContainerDied","Data":"98249bc6d50d2537c68262c6e7fc0d86e42e355be60806fec98a08097eb949a8"} Jan 10 06:52:54 crc kubenswrapper[4810]: I0110 06:52:54.989948 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fblgv" event={"ID":"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92","Type":"ContainerStarted","Data":"584e9d87d36b295415b82e07eba91a5493dca3427f78f719e96608036445c7ae"} Jan 10 06:52:54 crc kubenswrapper[4810]: I0110 06:52:54.998099 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a578a2-8b7d-4d6d-94bd-a258853f79a2" 
containerID="2f509b35a8380bcd5af137048b398025901757e035c95e544edeccc9aba8ecfe" exitCode=0 Jan 10 06:52:54 crc kubenswrapper[4810]: I0110 06:52:54.998238 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2n6cp" event={"ID":"61a578a2-8b7d-4d6d-94bd-a258853f79a2","Type":"ContainerDied","Data":"2f509b35a8380bcd5af137048b398025901757e035c95e544edeccc9aba8ecfe"} Jan 10 06:52:55 crc kubenswrapper[4810]: I0110 06:52:55.007026 4810 generic.go:334] "Generic (PLEG): container finished" podID="783ca84a-455c-4f14-8b77-4a9689c46026" containerID="5d1c7b29f545d44a620805e530e7d769e6ada3799e0961ba3eda7b58a2984b9d" exitCode=0 Jan 10 06:52:55 crc kubenswrapper[4810]: I0110 06:52:55.007093 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cvnc" event={"ID":"783ca84a-455c-4f14-8b77-4a9689c46026","Type":"ContainerDied","Data":"5d1c7b29f545d44a620805e530e7d769e6ada3799e0961ba3eda7b58a2984b9d"} Jan 10 06:52:56 crc kubenswrapper[4810]: I0110 06:52:56.018676 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6cvnc" event={"ID":"783ca84a-455c-4f14-8b77-4a9689c46026","Type":"ContainerStarted","Data":"dc028f550507f97c2892eeb6854ec1d384f7301086d909e09829aa0ffd3f9028"} Jan 10 06:52:56 crc kubenswrapper[4810]: I0110 06:52:56.022071 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-db5l6" event={"ID":"678a3ed9-5d4e-493f-9be6-fad386a36363","Type":"ContainerStarted","Data":"8dbc35e56e067393ce6387446ad1d481df20592397e0190281a4a81a6a143668"} Jan 10 06:52:56 crc kubenswrapper[4810]: I0110 06:52:56.024825 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fblgv" event={"ID":"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92","Type":"ContainerStarted","Data":"e01ec0233b30e1976169afda540dfb9bb74847e24039e42ccae771bcf4d46fcf"} Jan 10 06:52:56 crc kubenswrapper[4810]: 
I0110 06:52:56.026943 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2n6cp" event={"ID":"61a578a2-8b7d-4d6d-94bd-a258853f79a2","Type":"ContainerStarted","Data":"a0182d66582d58aac4ac89db90b087d88c85f6dae0cfc750856de798cfaf8318"} Jan 10 06:52:56 crc kubenswrapper[4810]: I0110 06:52:56.047335 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6cvnc" podStartSLOduration=4.427429158 podStartE2EDuration="7.047314663s" podCreationTimestamp="2026-01-10 06:52:49 +0000 UTC" firstStartedPulling="2026-01-10 06:52:52.962327273 +0000 UTC m=+401.577820206" lastFinishedPulling="2026-01-10 06:52:55.582212828 +0000 UTC m=+404.197705711" observedRunningTime="2026-01-10 06:52:56.042808066 +0000 UTC m=+404.658300959" watchObservedRunningTime="2026-01-10 06:52:56.047314663 +0000 UTC m=+404.662807546" Jan 10 06:52:56 crc kubenswrapper[4810]: I0110 06:52:56.094764 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2n6cp" podStartSLOduration=2.496035667 podStartE2EDuration="5.094740136s" podCreationTimestamp="2026-01-10 06:52:51 +0000 UTC" firstStartedPulling="2026-01-10 06:52:52.960318015 +0000 UTC m=+401.575810908" lastFinishedPulling="2026-01-10 06:52:55.559022494 +0000 UTC m=+404.174515377" observedRunningTime="2026-01-10 06:52:56.092081552 +0000 UTC m=+404.707574435" watchObservedRunningTime="2026-01-10 06:52:56.094740136 +0000 UTC m=+404.710233029" Jan 10 06:52:56 crc kubenswrapper[4810]: I0110 06:52:56.114181 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-db5l6" podStartSLOduration=2.529420824 podStartE2EDuration="5.1141638s" podCreationTimestamp="2026-01-10 06:52:51 +0000 UTC" firstStartedPulling="2026-01-10 06:52:52.965076819 +0000 UTC m=+401.580569742" lastFinishedPulling="2026-01-10 06:52:55.549819835 +0000 UTC 
m=+404.165312718" observedRunningTime="2026-01-10 06:52:56.110614834 +0000 UTC m=+404.726107717" watchObservedRunningTime="2026-01-10 06:52:56.1141638 +0000 UTC m=+404.729656683" Jan 10 06:52:57 crc kubenswrapper[4810]: I0110 06:52:57.033689 4810 generic.go:334] "Generic (PLEG): container finished" podID="98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92" containerID="e01ec0233b30e1976169afda540dfb9bb74847e24039e42ccae771bcf4d46fcf" exitCode=0 Jan 10 06:52:57 crc kubenswrapper[4810]: I0110 06:52:57.033738 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fblgv" event={"ID":"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92","Type":"ContainerDied","Data":"e01ec0233b30e1976169afda540dfb9bb74847e24039e42ccae771bcf4d46fcf"} Jan 10 06:52:58 crc kubenswrapper[4810]: I0110 06:52:58.041488 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fblgv" event={"ID":"98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92","Type":"ContainerStarted","Data":"f3b62ca7fd763f9fa7e49d98f95ae4fa1d9ddc5ec987699946729d63bb995879"} Jan 10 06:52:58 crc kubenswrapper[4810]: I0110 06:52:58.064875 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fblgv" podStartSLOduration=2.562533099 podStartE2EDuration="5.064838996s" podCreationTimestamp="2026-01-10 06:52:53 +0000 UTC" firstStartedPulling="2026-01-10 06:52:54.991849062 +0000 UTC m=+403.607341975" lastFinishedPulling="2026-01-10 06:52:57.494154989 +0000 UTC m=+406.109647872" observedRunningTime="2026-01-10 06:52:58.057876419 +0000 UTC m=+406.673369312" watchObservedRunningTime="2026-01-10 06:52:58.064838996 +0000 UTC m=+406.680331929" Jan 10 06:52:59 crc kubenswrapper[4810]: I0110 06:52:59.812142 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6cvnc" Jan 10 06:52:59 crc kubenswrapper[4810]: I0110 06:52:59.812214 4810 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6cvnc" Jan 10 06:52:59 crc kubenswrapper[4810]: I0110 06:52:59.885565 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6cvnc" Jan 10 06:53:00 crc kubenswrapper[4810]: I0110 06:53:00.098460 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6cvnc" Jan 10 06:53:01 crc kubenswrapper[4810]: I0110 06:53:01.633747 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-db5l6" Jan 10 06:53:01 crc kubenswrapper[4810]: I0110 06:53:01.634175 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-db5l6" Jan 10 06:53:01 crc kubenswrapper[4810]: I0110 06:53:01.690874 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-db5l6" Jan 10 06:53:02 crc kubenswrapper[4810]: I0110 06:53:02.123549 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-db5l6" Jan 10 06:53:02 crc kubenswrapper[4810]: I0110 06:53:02.195908 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2n6cp" Jan 10 06:53:02 crc kubenswrapper[4810]: I0110 06:53:02.195949 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2n6cp" Jan 10 06:53:02 crc kubenswrapper[4810]: I0110 06:53:02.247963 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2n6cp" Jan 10 06:53:03 crc kubenswrapper[4810]: I0110 06:53:03.104717 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2n6cp" Jan 10 06:53:04 crc 
kubenswrapper[4810]: I0110 06:53:04.099182 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:53:04 crc kubenswrapper[4810]: I0110 06:53:04.100316 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:53:04 crc kubenswrapper[4810]: I0110 06:53:04.145186 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:53:05 crc kubenswrapper[4810]: I0110 06:53:05.171321 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fblgv" Jan 10 06:53:15 crc kubenswrapper[4810]: I0110 06:53:15.695682 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" podUID="f8d04d07-f61c-4c6c-9038-7cca9d199ede" containerName="registry" containerID="cri-o://ea6b03e99481b35195ac9d8a10f516eb2006803b1df41d240877f0640643b0ee" gracePeriod=30 Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.213826 4810 generic.go:334] "Generic (PLEG): container finished" podID="f8d04d07-f61c-4c6c-9038-7cca9d199ede" containerID="ea6b03e99481b35195ac9d8a10f516eb2006803b1df41d240877f0640643b0ee" exitCode=0 Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.213950 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" event={"ID":"f8d04d07-f61c-4c6c-9038-7cca9d199ede","Type":"ContainerDied","Data":"ea6b03e99481b35195ac9d8a10f516eb2006803b1df41d240877f0640643b0ee"} Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.723050 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.855037 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8d04d07-f61c-4c6c-9038-7cca9d199ede-ca-trust-extracted\") pod \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.855123 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-bound-sa-token\") pod \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.855179 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-registry-tls\") pod \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.855252 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8d04d07-f61c-4c6c-9038-7cca9d199ede-trusted-ca\") pod \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.855281 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr56n\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-kube-api-access-sr56n\") pod \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.855305 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8d04d07-f61c-4c6c-9038-7cca9d199ede-registry-certificates\") pod \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.855339 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f8d04d07-f61c-4c6c-9038-7cca9d199ede-installation-pull-secrets\") pod \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.855581 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\" (UID: \"f8d04d07-f61c-4c6c-9038-7cca9d199ede\") " Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.856802 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d04d07-f61c-4c6c-9038-7cca9d199ede-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f8d04d07-f61c-4c6c-9038-7cca9d199ede" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.857819 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8d04d07-f61c-4c6c-9038-7cca9d199ede-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f8d04d07-f61c-4c6c-9038-7cca9d199ede" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.863299 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f8d04d07-f61c-4c6c-9038-7cca9d199ede" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.863338 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8d04d07-f61c-4c6c-9038-7cca9d199ede-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f8d04d07-f61c-4c6c-9038-7cca9d199ede" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.864235 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-kube-api-access-sr56n" (OuterVolumeSpecName: "kube-api-access-sr56n") pod "f8d04d07-f61c-4c6c-9038-7cca9d199ede" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede"). InnerVolumeSpecName "kube-api-access-sr56n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.871073 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f8d04d07-f61c-4c6c-9038-7cca9d199ede" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.877522 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8d04d07-f61c-4c6c-9038-7cca9d199ede-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f8d04d07-f61c-4c6c-9038-7cca9d199ede" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.879349 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "f8d04d07-f61c-4c6c-9038-7cca9d199ede" (UID: "f8d04d07-f61c-4c6c-9038-7cca9d199ede"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.958269 4810 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.958751 4810 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.959171 4810 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f8d04d07-f61c-4c6c-9038-7cca9d199ede-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.959383 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr56n\" (UniqueName: \"kubernetes.io/projected/f8d04d07-f61c-4c6c-9038-7cca9d199ede-kube-api-access-sr56n\") on node \"crc\" DevicePath \"\""
Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.959507 4810 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f8d04d07-f61c-4c6c-9038-7cca9d199ede-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.959638 4810 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f8d04d07-f61c-4c6c-9038-7cca9d199ede-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 10 06:53:18 crc kubenswrapper[4810]: I0110 06:53:18.959796 4810 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f8d04d07-f61c-4c6c-9038-7cca9d199ede-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 10 06:53:19 crc kubenswrapper[4810]: I0110 06:53:19.221595 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7" event={"ID":"f8d04d07-f61c-4c6c-9038-7cca9d199ede","Type":"ContainerDied","Data":"f3a82cad5c7daf1d945f300fb0192a8cbfa83267deae96028446f1789e39ae0b"}
Jan 10 06:53:19 crc kubenswrapper[4810]: I0110 06:53:19.221672 4810 scope.go:117] "RemoveContainer" containerID="ea6b03e99481b35195ac9d8a10f516eb2006803b1df41d240877f0640643b0ee"
Jan 10 06:53:19 crc kubenswrapper[4810]: I0110 06:53:19.221719 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nqtg7"
Jan 10 06:53:19 crc kubenswrapper[4810]: I0110 06:53:19.274625 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nqtg7"]
Jan 10 06:53:19 crc kubenswrapper[4810]: I0110 06:53:19.282900 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nqtg7"]
Jan 10 06:53:19 crc kubenswrapper[4810]: I0110 06:53:19.705288 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8d04d07-f61c-4c6c-9038-7cca9d199ede" path="/var/lib/kubelet/pods/f8d04d07-f61c-4c6c-9038-7cca9d199ede/volumes"
Jan 10 06:53:20 crc kubenswrapper[4810]: I0110 06:53:20.883131 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 06:53:20 crc kubenswrapper[4810]: I0110 06:53:20.883277 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 06:53:20 crc kubenswrapper[4810]: I0110 06:53:20.883345 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp"
Jan 10 06:53:20 crc kubenswrapper[4810]: I0110 06:53:20.884267 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fed5cc11661bddfe784aa76be4300a8d8ea3e2ff81f1fb4536922a245a7ce154"} pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 10 06:53:20 crc kubenswrapper[4810]: I0110 06:53:20.884374 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" containerID="cri-o://fed5cc11661bddfe784aa76be4300a8d8ea3e2ff81f1fb4536922a245a7ce154" gracePeriod=600
Jan 10 06:53:21 crc kubenswrapper[4810]: I0110 06:53:21.238813 4810 generic.go:334] "Generic (PLEG): container finished" podID="b5b79429-9259-412f-bab8-27865ab7029b" containerID="fed5cc11661bddfe784aa76be4300a8d8ea3e2ff81f1fb4536922a245a7ce154" exitCode=0
Jan 10 06:53:21 crc kubenswrapper[4810]: I0110 06:53:21.238866 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerDied","Data":"fed5cc11661bddfe784aa76be4300a8d8ea3e2ff81f1fb4536922a245a7ce154"}
Jan 10 06:53:21 crc kubenswrapper[4810]: I0110 06:53:21.238943 4810 scope.go:117] "RemoveContainer" containerID="936c73ef27f06628346dc70fa714a322bfa97b4dce4a0ec5daa0183bb8c8f208"
Jan 10 06:53:22 crc kubenswrapper[4810]: I0110 06:53:22.248363 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerStarted","Data":"df6bb1628e4de8d40f9cf46b93e9cff42db4028c710f0012c0d053532cc05941"}
Jan 10 06:55:50 crc kubenswrapper[4810]: I0110 06:55:50.883084 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 06:55:50 crc kubenswrapper[4810]: I0110 06:55:50.883731 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 06:56:20 crc kubenswrapper[4810]: I0110 06:56:20.883156 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 06:56:20 crc kubenswrapper[4810]: I0110 06:56:20.883819 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 06:56:50 crc kubenswrapper[4810]: I0110 06:56:50.882966 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 06:56:50 crc kubenswrapper[4810]: I0110 06:56:50.883594 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 06:56:50 crc kubenswrapper[4810]: I0110 06:56:50.883642 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp"
Jan 10 06:56:50 crc kubenswrapper[4810]: I0110 06:56:50.884209 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df6bb1628e4de8d40f9cf46b93e9cff42db4028c710f0012c0d053532cc05941"} pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 10 06:56:50 crc kubenswrapper[4810]: I0110 06:56:50.884269 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" containerID="cri-o://df6bb1628e4de8d40f9cf46b93e9cff42db4028c710f0012c0d053532cc05941" gracePeriod=600
Jan 10 06:56:51 crc kubenswrapper[4810]: I0110 06:56:51.688777 4810 generic.go:334] "Generic (PLEG): container finished" podID="b5b79429-9259-412f-bab8-27865ab7029b" containerID="df6bb1628e4de8d40f9cf46b93e9cff42db4028c710f0012c0d053532cc05941" exitCode=0
Jan 10 06:56:51 crc kubenswrapper[4810]: I0110 06:56:51.688853 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerDied","Data":"df6bb1628e4de8d40f9cf46b93e9cff42db4028c710f0012c0d053532cc05941"}
Jan 10 06:56:51 crc kubenswrapper[4810]: I0110 06:56:51.689368 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerStarted","Data":"15f41bc02b9f771423fbf604f7e443b9696eeab6c3ab3a52722c60140661e2b7"}
Jan 10 06:56:51 crc kubenswrapper[4810]: I0110 06:56:51.689405 4810 scope.go:117] "RemoveContainer" containerID="fed5cc11661bddfe784aa76be4300a8d8ea3e2ff81f1fb4536922a245a7ce154"
Jan 10 06:57:30 crc kubenswrapper[4810]: I0110 06:57:30.373719 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t4zqb"]
Jan 10 06:57:30 crc kubenswrapper[4810]: I0110 06:57:30.375072 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovn-controller" containerID="cri-o://f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a" gracePeriod=30
Jan 10 06:57:30 crc kubenswrapper[4810]: I0110 06:57:30.375142 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="nbdb" containerID="cri-o://37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775" gracePeriod=30
Jan 10 06:57:30 crc kubenswrapper[4810]: I0110 06:57:30.375282 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="northd" containerID="cri-o://24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2" gracePeriod=30
Jan 10 06:57:30 crc kubenswrapper[4810]: I0110 06:57:30.375367 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovn-acl-logging" containerID="cri-o://f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779" gracePeriod=30
Jan 10 06:57:30 crc kubenswrapper[4810]: I0110 06:57:30.375460 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520" gracePeriod=30
Jan 10 06:57:30 crc kubenswrapper[4810]: I0110 06:57:30.375620 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="sbdb" containerID="cri-o://10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c" gracePeriod=30
Jan 10 06:57:30 crc kubenswrapper[4810]: I0110 06:57:30.375723 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="kube-rbac-proxy-node" containerID="cri-o://acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab" gracePeriod=30
Jan 10 06:57:30 crc kubenswrapper[4810]: I0110 06:57:30.460558 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller" containerID="cri-o://f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833" gracePeriod=30
Jan 10 06:57:31 crc kubenswrapper[4810]: I0110 06:57:31.965275 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/3.log"
Jan 10 06:57:31 crc kubenswrapper[4810]: I0110 06:57:31.969658 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovn-acl-logging/0.log"
Jan 10 06:57:31 crc kubenswrapper[4810]: I0110 06:57:31.970894 4810 generic.go:334] "Generic (PLEG): container finished" podID="dce51084-e094-437c-a988-66b17982fd5d" containerID="f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779" exitCode=143
Jan 10 06:57:31 crc kubenswrapper[4810]: I0110 06:57:31.970954 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779"}
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.543743 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c is running failed: container process not found" containerID="10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.543822 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775 is running failed: container process not found" containerID="37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.544469 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c is running failed: container process not found" containerID="10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.544503 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775 is running failed: container process not found" containerID="37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.545046 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775 is running failed: container process not found" containerID="37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.545098 4810 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="nbdb"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.545309 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c is running failed: container process not found" containerID="10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.545374 4810 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="sbdb"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.779389 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/3.log"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.782342 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovn-acl-logging/0.log"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.782881 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovn-controller/0.log"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.783564 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.854956 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fhv9b"]
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855173 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855187 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855217 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855225 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855237 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855245 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855255 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855262 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855272 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855279 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855289 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="sbdb"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855296 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="sbdb"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855306 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="kube-rbac-proxy-ovn-metrics"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855314 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="kube-rbac-proxy-ovn-metrics"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855323 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovn-acl-logging"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855331 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovn-acl-logging"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855344 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="kube-rbac-proxy-node"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855353 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="kube-rbac-proxy-node"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855364 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="northd"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855372 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="northd"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855382 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="kubecfg-setup"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855389 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="kubecfg-setup"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855399 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovn-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855407 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovn-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855419 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="nbdb"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855426 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="nbdb"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.855438 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8d04d07-f61c-4c6c-9038-7cca9d199ede" containerName="registry"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855446 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8d04d07-f61c-4c6c-9038-7cca9d199ede" containerName="registry"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855567 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovn-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855578 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="kube-rbac-proxy-node"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855590 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="sbdb"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855598 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855609 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovn-acl-logging"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855621 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855630 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8d04d07-f61c-4c6c-9038-7cca9d199ede" containerName="registry"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855640 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="northd"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855649 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855659 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="nbdb"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855669 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855679 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="kube-rbac-proxy-ovn-metrics"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.855930 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce51084-e094-437c-a988-66b17982fd5d" containerName="ovnkube-controller"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.858152 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932123 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-env-overrides\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932372 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-etc-openvswitch\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932455 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-cni-bin\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932416 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932574 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-run-netns\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932644 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/dce51084-e094-437c-a988-66b17982fd5d-kube-api-access-55f7t\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932713 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-log-socket\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932773 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-node-log\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932835 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-systemd\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932933 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dce51084-e094-437c-a988-66b17982fd5d-ovn-node-metrics-cert\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933012 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-ovnkube-script-lib\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933098 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933164 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-ovn\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933245 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-openvswitch\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933312 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-run-ovn-kubernetes\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933379 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-ovnkube-config\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933441 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-kubelet\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933513 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-var-lib-openvswitch\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933572 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-slash\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933662 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-cni-netd\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933752 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-systemd-units\") pod \"dce51084-e094-437c-a988-66b17982fd5d\" (UID: \"dce51084-e094-437c-a988-66b17982fd5d\") "
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932603 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932617 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932742 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-log-socket" (OuterVolumeSpecName: "log-socket") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932682 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.932902 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-node-log" (OuterVolumeSpecName: "node-log") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933301 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933343 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933336 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933359 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933593 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933625 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-slash" (OuterVolumeSpecName: "host-slash") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933628 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933739 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933727 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933775 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.933985 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.934619 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-cni-netd\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.934713 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-run-ovn\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.934800 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-run-ovn-kubernetes\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.935041 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e830f9d6-ce70-4241-afcf-11ea7b532c6c-ovnkube-config\") pod 
\"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.935131 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-run-netns\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.935231 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-log-socket\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.935317 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-systemd-units\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.935399 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws6mn\" (UniqueName: \"kubernetes.io/projected/e830f9d6-ce70-4241-afcf-11ea7b532c6c-kube-api-access-ws6mn\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.935475 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-etc-openvswitch\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.935548 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-run-openvswitch\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.935636 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.935734 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-slash\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.935827 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-kubelet\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.935916 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e830f9d6-ce70-4241-afcf-11ea7b532c6c-env-overrides\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.936029 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e830f9d6-ce70-4241-afcf-11ea7b532c6c-ovnkube-script-lib\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.936290 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-cni-bin\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.936402 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e830f9d6-ce70-4241-afcf-11ea7b532c6c-ovn-node-metrics-cert\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.936498 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-var-lib-openvswitch\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 
06:57:32.936592 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-run-systemd\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.936677 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-node-log\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.936786 4810 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.937028 4810 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.937092 4810 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.937176 4810 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-log-socket\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.937267 4810 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-node-log\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.937355 4810 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.937433 4810 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.937511 4810 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.937577 4810 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.937660 4810 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.937729 4810 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.937802 4810 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.937867 4810 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.937985 4810 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-slash\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.938055 4810 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.938125 4810 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.946109 4810 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dce51084-e094-437c-a988-66b17982fd5d-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.942273 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce51084-e094-437c-a988-66b17982fd5d-kube-api-access-55f7t" (OuterVolumeSpecName: "kube-api-access-55f7t") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "kube-api-access-55f7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.942581 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dce51084-e094-437c-a988-66b17982fd5d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.963137 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "dce51084-e094-437c-a988-66b17982fd5d" (UID: "dce51084-e094-437c-a988-66b17982fd5d"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.980009 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovnkube-controller/3.log" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.983431 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovn-acl-logging/0.log" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.983911 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-t4zqb_dce51084-e094-437c-a988-66b17982fd5d/ovn-controller/0.log" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.984416 4810 generic.go:334] "Generic (PLEG): container finished" podID="dce51084-e094-437c-a988-66b17982fd5d" containerID="f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833" exitCode=0 Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.984497 4810 generic.go:334] "Generic (PLEG): 
container finished" podID="dce51084-e094-437c-a988-66b17982fd5d" containerID="10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c" exitCode=0 Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.984561 4810 generic.go:334] "Generic (PLEG): container finished" podID="dce51084-e094-437c-a988-66b17982fd5d" containerID="37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775" exitCode=0 Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.984620 4810 generic.go:334] "Generic (PLEG): container finished" podID="dce51084-e094-437c-a988-66b17982fd5d" containerID="24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2" exitCode=0 Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.984676 4810 generic.go:334] "Generic (PLEG): container finished" podID="dce51084-e094-437c-a988-66b17982fd5d" containerID="e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520" exitCode=0 Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.984723 4810 generic.go:334] "Generic (PLEG): container finished" podID="dce51084-e094-437c-a988-66b17982fd5d" containerID="acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab" exitCode=0 Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.984779 4810 generic.go:334] "Generic (PLEG): container finished" podID="dce51084-e094-437c-a988-66b17982fd5d" containerID="f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a" exitCode=143 Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.984563 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.984494 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833"} Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985041 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c"} Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985059 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775"} Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985070 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2"} Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985079 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520"} Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985089 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab"} Jan 10 06:57:32 crc 
kubenswrapper[4810]: I0110 06:57:32.985098 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985107 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985112 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985117 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985122 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985127 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985131 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985136 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985141 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985148 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985156 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985161 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985167 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985172 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985176 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985181 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985186 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985208 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985215 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985221 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985230 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t4zqb" event={"ID":"dce51084-e094-437c-a988-66b17982fd5d","Type":"ContainerDied","Data":"6b7542e1b9f9cf82c1c9591d2fcfc6a66c34419eccc8b54cd40234db5c0676c4"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985240 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985247 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985253 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985259 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985265 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985273 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985279 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985285 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985291 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985297 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.985311 4810 scope.go:117] "RemoveContainer" containerID="f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.986949 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7gh2_34d87e8a-cdfb-46ed-97db-2d07cffec516/kube-multus/2.log"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.987736 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7gh2_34d87e8a-cdfb-46ed-97db-2d07cffec516/kube-multus/1.log"
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.987812 4810 generic.go:334] "Generic (PLEG): container finished" podID="34d87e8a-cdfb-46ed-97db-2d07cffec516" containerID="119ef4f78bf3e22db81c778c1d3560cd9a432d4c452b44055df285c9a18fa3f0" exitCode=2
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.987858 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7gh2" event={"ID":"34d87e8a-cdfb-46ed-97db-2d07cffec516","Type":"ContainerDied","Data":"119ef4f78bf3e22db81c778c1d3560cd9a432d4c452b44055df285c9a18fa3f0"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.987891 4810 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94"}
Jan 10 06:57:32 crc kubenswrapper[4810]: I0110 06:57:32.988646 4810 scope.go:117] "RemoveContainer" containerID="119ef4f78bf3e22db81c778c1d3560cd9a432d4c452b44055df285c9a18fa3f0"
Jan 10 06:57:32 crc kubenswrapper[4810]: E0110 06:57:32.989113 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-t7gh2_openshift-multus(34d87e8a-cdfb-46ed-97db-2d07cffec516)\"" pod="openshift-multus/multus-t7gh2" podUID="34d87e8a-cdfb-46ed-97db-2d07cffec516"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.015912 4810 scope.go:117] "RemoveContainer" containerID="316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.034796 4810 scope.go:117] "RemoveContainer" containerID="10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048497 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-run-ovn-kubernetes\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048545 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e830f9d6-ce70-4241-afcf-11ea7b532c6c-ovnkube-config\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048586 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-run-netns\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048608 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-log-socket\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048614 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-run-ovn-kubernetes\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048629 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-systemd-units\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048679 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-systemd-units\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048740 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-run-netns\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048770 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-log-socket\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048813 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws6mn\" (UniqueName: \"kubernetes.io/projected/e830f9d6-ce70-4241-afcf-11ea7b532c6c-kube-api-access-ws6mn\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048862 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-run-openvswitch\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048894 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048931 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-etc-openvswitch\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.048987 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-slash\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049031 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-kubelet\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049067 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e830f9d6-ce70-4241-afcf-11ea7b532c6c-env-overrides\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049111 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e830f9d6-ce70-4241-afcf-11ea7b532c6c-ovnkube-script-lib\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049141 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-kubelet\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049175 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-slash\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049185 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049137 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-etc-openvswitch\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049256 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-run-openvswitch\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049351 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-cni-bin\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049389 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e830f9d6-ce70-4241-afcf-11ea7b532c6c-ovnkube-config\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049469 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e830f9d6-ce70-4241-afcf-11ea7b532c6c-ovn-node-metrics-cert\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049504 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-cni-bin\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049673 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-var-lib-openvswitch\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049725 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-run-systemd\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049749 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-node-log\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049791 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-cni-netd\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049813 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-run-ovn\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049890 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55f7t\" (UniqueName: \"kubernetes.io/projected/dce51084-e094-437c-a988-66b17982fd5d-kube-api-access-55f7t\") on node \"crc\" DevicePath \"\""
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049904 4810 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dce51084-e094-437c-a988-66b17982fd5d-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049917 4810 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dce51084-e094-437c-a988-66b17982fd5d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049952 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-run-ovn\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.049980 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-var-lib-openvswitch\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.050005 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-run-systemd\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.050032 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-host-cni-netd\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.050051 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e830f9d6-ce70-4241-afcf-11ea7b532c6c-node-log\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.050253 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e830f9d6-ce70-4241-afcf-11ea7b532c6c-env-overrides\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.050300 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e830f9d6-ce70-4241-afcf-11ea7b532c6c-ovnkube-script-lib\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.055488 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e830f9d6-ce70-4241-afcf-11ea7b532c6c-ovn-node-metrics-cert\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.063021 4810 scope.go:117] "RemoveContainer" containerID="37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.063991 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t4zqb"]
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.072971 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws6mn\" (UniqueName: \"kubernetes.io/projected/e830f9d6-ce70-4241-afcf-11ea7b532c6c-kube-api-access-ws6mn\") pod \"ovnkube-node-fhv9b\" (UID: \"e830f9d6-ce70-4241-afcf-11ea7b532c6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.075340 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t4zqb"]
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.083016 4810 scope.go:117] "RemoveContainer" containerID="24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.099850 4810 scope.go:117] "RemoveContainer" containerID="e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.127645 4810 scope.go:117] "RemoveContainer" containerID="acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.153640 4810 scope.go:117] "RemoveContainer" containerID="f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.175335 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.185350 4810 scope.go:117] "RemoveContainer" containerID="f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.206331 4810 scope.go:117] "RemoveContainer" containerID="d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.225046 4810 scope.go:117] "RemoveContainer" containerID="f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833"
Jan 10 06:57:33 crc kubenswrapper[4810]: E0110 06:57:33.225466 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833\": container with ID starting with f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833 not found: ID does not exist" containerID="f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.225511 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833"} err="failed to get container status \"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833\": rpc error: code = NotFound desc = could not find container \"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833\": container with ID starting with f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833 not found: ID does not exist"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.225543 4810 scope.go:117] "RemoveContainer" containerID="316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8"
Jan 10 06:57:33 crc kubenswrapper[4810]: E0110 06:57:33.226012 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\": container with ID starting with 316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8 not found: ID does not exist" containerID="316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.226063 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8"} err="failed to get container status \"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\": rpc error: code = NotFound desc = could not find container \"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\": container with ID starting with 316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8 not found: ID does not exist"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.226086 4810 scope.go:117] "RemoveContainer" containerID="10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c"
Jan 10 06:57:33 crc kubenswrapper[4810]: E0110 06:57:33.227366 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\": container with ID starting with 10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c not found: ID does not exist" containerID="10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.227398 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c"} err="failed to get container status \"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\": rpc error: code = NotFound desc = could not find container \"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\": container with ID starting with 10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c not found: ID does not exist"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.227414 4810 scope.go:117] "RemoveContainer" containerID="37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775"
Jan 10 06:57:33 crc kubenswrapper[4810]: E0110 06:57:33.227827 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\": container with ID starting with 37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775 not found: ID does not exist" containerID="37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.227846 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775"} err="failed to get container status \"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\": rpc error: code = NotFound desc = could not find container \"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\": container with ID starting with 37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775 not found: ID does not exist"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.227860 4810 scope.go:117] "RemoveContainer" containerID="24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2"
Jan 10 06:57:33 crc kubenswrapper[4810]: E0110 06:57:33.229881 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\": container with ID starting with 24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2 not found: ID does not exist" containerID="24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.229911 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2"} err="failed to get container status \"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\": rpc error: code = NotFound desc = could not find container \"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\": container with ID starting with 24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2 not found: ID does not exist"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.229932 4810 scope.go:117] "RemoveContainer" containerID="e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520"
Jan 10 06:57:33 crc kubenswrapper[4810]: E0110 06:57:33.230460 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\": container with ID starting with e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520 not found: ID does not exist" containerID="e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.230533 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520"} err="failed to get container status \"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\": rpc error: code = NotFound desc = could not find container \"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\": container with ID starting with e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520 not found: ID does not exist"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.230551 4810 scope.go:117] "RemoveContainer" containerID="acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab"
Jan 10 06:57:33 crc kubenswrapper[4810]: E0110 06:57:33.230928 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\": container with ID starting with acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab not found: ID does not exist" containerID="acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.230955 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab"} err="failed to get container status \"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\": rpc error: code = NotFound desc = could not find container \"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\": container with ID starting with acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab not found: ID does not exist"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.230971 4810 scope.go:117] "RemoveContainer" containerID="f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779"
Jan 10 06:57:33 crc kubenswrapper[4810]: E0110 06:57:33.231346 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\": container with ID starting with f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779 not found: ID does not exist" containerID="f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.231398 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779"} err="failed to get container status \"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\": rpc error: code = NotFound desc = could not find container \"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\": container with ID starting with f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779 not found: ID does not exist"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.231417 4810 scope.go:117] "RemoveContainer" containerID="f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a"
Jan 10 06:57:33 crc kubenswrapper[4810]: E0110 06:57:33.231719 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\": container with ID starting with f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a not found: ID does not exist" containerID="f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.231742 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a"} err="failed to get container status \"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\": rpc error: code = NotFound desc = could not find container \"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\": container with ID starting with f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a not found: ID does not exist"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.231756 4810 scope.go:117] "RemoveContainer" containerID="d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be"
Jan 10 06:57:33 crc kubenswrapper[4810]: E0110 06:57:33.232037 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\": container with ID starting with d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be not found: ID does not exist" containerID="d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.232060 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be"} err="failed to get container status \"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\": rpc error: code = NotFound desc = could not find container \"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\": container with ID starting with d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be not found: ID does not exist"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.232075 4810 scope.go:117] "RemoveContainer" containerID="f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.232400 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833"} err="failed to get container status \"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833\": rpc error: code = NotFound desc = could not find container \"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833\": container with ID starting with f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833 not found: ID does not exist"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.232427 4810 scope.go:117] "RemoveContainer" containerID="316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8"
Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.232755 4810
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8"} err="failed to get container status \"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\": rpc error: code = NotFound desc = could not find container \"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\": container with ID starting with 316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.232780 4810 scope.go:117] "RemoveContainer" containerID="10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.233036 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c"} err="failed to get container status \"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\": rpc error: code = NotFound desc = could not find container \"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\": container with ID starting with 10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.233057 4810 scope.go:117] "RemoveContainer" containerID="37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.233411 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775"} err="failed to get container status \"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\": rpc error: code = NotFound desc = could not find container \"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\": container with ID starting with 
37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.233430 4810 scope.go:117] "RemoveContainer" containerID="24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.233622 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2"} err="failed to get container status \"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\": rpc error: code = NotFound desc = could not find container \"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\": container with ID starting with 24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.233649 4810 scope.go:117] "RemoveContainer" containerID="e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.233861 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520"} err="failed to get container status \"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\": rpc error: code = NotFound desc = could not find container \"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\": container with ID starting with e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.233879 4810 scope.go:117] "RemoveContainer" containerID="acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.234155 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab"} err="failed to get container status \"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\": rpc error: code = NotFound desc = could not find container \"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\": container with ID starting with acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.234178 4810 scope.go:117] "RemoveContainer" containerID="f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.234424 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779"} err="failed to get container status \"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\": rpc error: code = NotFound desc = could not find container \"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\": container with ID starting with f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.234443 4810 scope.go:117] "RemoveContainer" containerID="f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.234665 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a"} err="failed to get container status \"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\": rpc error: code = NotFound desc = could not find container \"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\": container with ID starting with f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a not found: ID does not 
exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.234687 4810 scope.go:117] "RemoveContainer" containerID="d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.234906 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be"} err="failed to get container status \"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\": rpc error: code = NotFound desc = could not find container \"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\": container with ID starting with d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.234925 4810 scope.go:117] "RemoveContainer" containerID="f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.235113 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833"} err="failed to get container status \"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833\": rpc error: code = NotFound desc = could not find container \"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833\": container with ID starting with f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.235130 4810 scope.go:117] "RemoveContainer" containerID="316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.235424 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8"} err="failed to get container status 
\"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\": rpc error: code = NotFound desc = could not find container \"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\": container with ID starting with 316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.235443 4810 scope.go:117] "RemoveContainer" containerID="10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.235662 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c"} err="failed to get container status \"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\": rpc error: code = NotFound desc = could not find container \"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\": container with ID starting with 10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.235682 4810 scope.go:117] "RemoveContainer" containerID="37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.235883 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775"} err="failed to get container status \"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\": rpc error: code = NotFound desc = could not find container \"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\": container with ID starting with 37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.235900 4810 scope.go:117] "RemoveContainer" 
containerID="24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.236113 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2"} err="failed to get container status \"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\": rpc error: code = NotFound desc = could not find container \"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\": container with ID starting with 24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.236149 4810 scope.go:117] "RemoveContainer" containerID="e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.236460 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520"} err="failed to get container status \"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\": rpc error: code = NotFound desc = could not find container \"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\": container with ID starting with e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.236478 4810 scope.go:117] "RemoveContainer" containerID="acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.236767 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab"} err="failed to get container status \"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\": rpc error: code = NotFound desc = could 
not find container \"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\": container with ID starting with acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.236788 4810 scope.go:117] "RemoveContainer" containerID="f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.237009 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779"} err="failed to get container status \"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\": rpc error: code = NotFound desc = could not find container \"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\": container with ID starting with f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.237027 4810 scope.go:117] "RemoveContainer" containerID="f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.237234 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a"} err="failed to get container status \"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\": rpc error: code = NotFound desc = could not find container \"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\": container with ID starting with f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.237250 4810 scope.go:117] "RemoveContainer" containerID="d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 
06:57:33.237520 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be"} err="failed to get container status \"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\": rpc error: code = NotFound desc = could not find container \"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\": container with ID starting with d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.237562 4810 scope.go:117] "RemoveContainer" containerID="f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.237820 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833"} err="failed to get container status \"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833\": rpc error: code = NotFound desc = could not find container \"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833\": container with ID starting with f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.237838 4810 scope.go:117] "RemoveContainer" containerID="316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.238127 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8"} err="failed to get container status \"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\": rpc error: code = NotFound desc = could not find container \"316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8\": container with ID starting with 
316b828f706a778100742e5469ead815e270378cf12011ed20694d15fd871cf8 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.238154 4810 scope.go:117] "RemoveContainer" containerID="10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.238443 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c"} err="failed to get container status \"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\": rpc error: code = NotFound desc = could not find container \"10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c\": container with ID starting with 10268c19949c09a081aea2fe633a36df7a474baed9f3e9b20fc4c384a3d0ad5c not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.238462 4810 scope.go:117] "RemoveContainer" containerID="37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.238699 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775"} err="failed to get container status \"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\": rpc error: code = NotFound desc = could not find container \"37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775\": container with ID starting with 37f549cdad9da041c36d49341c256e730f5c4907d84b353200888a71f7d57775 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.238718 4810 scope.go:117] "RemoveContainer" containerID="24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.238949 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2"} err="failed to get container status \"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\": rpc error: code = NotFound desc = could not find container \"24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2\": container with ID starting with 24c6aa6bbf09dbfd3ef3e7636c1db4ff8d17890cbee9ea47b2a67fb51b5dc9d2 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.238967 4810 scope.go:117] "RemoveContainer" containerID="e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.239178 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520"} err="failed to get container status \"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\": rpc error: code = NotFound desc = could not find container \"e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520\": container with ID starting with e27da25a1df7c864b27fea1ae2ddd7d3e7b2043280c0615076431b11e983b520 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.239229 4810 scope.go:117] "RemoveContainer" containerID="acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.239433 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab"} err="failed to get container status \"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\": rpc error: code = NotFound desc = could not find container \"acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab\": container with ID starting with acb2f79c10cb42529ebbdb4edde213dd02e50bb0c4721cd8efe87e2f01849eab not found: ID does not 
exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.239454 4810 scope.go:117] "RemoveContainer" containerID="f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.239679 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779"} err="failed to get container status \"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\": rpc error: code = NotFound desc = could not find container \"f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779\": container with ID starting with f2eb232a3ab23f87c4530c6bd4dbcce6a60799d7db046baaf68928c8ec4d4779 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.239695 4810 scope.go:117] "RemoveContainer" containerID="f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.239914 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a"} err="failed to get container status \"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\": rpc error: code = NotFound desc = could not find container \"f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a\": container with ID starting with f54cef8b9c707bf7a77970d67cba726455e50f2c13d539fe5bc3cd1b50ce9a0a not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.239935 4810 scope.go:117] "RemoveContainer" containerID="d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.240230 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be"} err="failed to get container status 
\"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\": rpc error: code = NotFound desc = could not find container \"d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be\": container with ID starting with d44260c0de888ee756803da9949a9170f103b8998d28335d81125e92851b86be not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.240249 4810 scope.go:117] "RemoveContainer" containerID="f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.240509 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833"} err="failed to get container status \"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833\": rpc error: code = NotFound desc = could not find container \"f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833\": container with ID starting with f6c9b893c11d9df63f0f043e07c150e33e4031f956b8ae3d9e9ae7988f4ff833 not found: ID does not exist" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.708270 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce51084-e094-437c-a988-66b17982fd5d" path="/var/lib/kubelet/pods/dce51084-e094-437c-a988-66b17982fd5d/volumes" Jan 10 06:57:33 crc kubenswrapper[4810]: I0110 06:57:33.996531 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" event={"ID":"e830f9d6-ce70-4241-afcf-11ea7b532c6c","Type":"ContainerStarted","Data":"766fb73f2a99ae839a9ce796eff1ca70220b2a4c3e838791227328436a3406d5"} Jan 10 06:57:35 crc kubenswrapper[4810]: I0110 06:57:35.007447 4810 generic.go:334] "Generic (PLEG): container finished" podID="e830f9d6-ce70-4241-afcf-11ea7b532c6c" containerID="21ec22069654ddc73a4551633142615bcb635bde2ff89fe1c08ab0450cc0cb80" exitCode=0 Jan 10 06:57:35 crc kubenswrapper[4810]: I0110 06:57:35.007526 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" event={"ID":"e830f9d6-ce70-4241-afcf-11ea7b532c6c","Type":"ContainerDied","Data":"21ec22069654ddc73a4551633142615bcb635bde2ff89fe1c08ab0450cc0cb80"} Jan 10 06:57:36 crc kubenswrapper[4810]: I0110 06:57:36.020386 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" event={"ID":"e830f9d6-ce70-4241-afcf-11ea7b532c6c","Type":"ContainerStarted","Data":"97752f8726de810d24ab4599cfcd7dbf7bd65c5871e9f61523c8e058628761d6"} Jan 10 06:57:36 crc kubenswrapper[4810]: I0110 06:57:36.020820 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" event={"ID":"e830f9d6-ce70-4241-afcf-11ea7b532c6c","Type":"ContainerStarted","Data":"aeae93c50410b63bd50e9152ffb9deae0b8ca87d79fda74d1d67e291a73592dc"} Jan 10 06:57:37 crc kubenswrapper[4810]: I0110 06:57:37.029396 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" event={"ID":"e830f9d6-ce70-4241-afcf-11ea7b532c6c","Type":"ContainerStarted","Data":"3f7cc289555006d82efc33b852be8ecc79e99d54c9513c4d7480e28bef2c9721"} Jan 10 06:57:37 crc kubenswrapper[4810]: I0110 06:57:37.029829 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" event={"ID":"e830f9d6-ce70-4241-afcf-11ea7b532c6c","Type":"ContainerStarted","Data":"3789379e24bc45b83f27caeec4b0cc2366a4cde740f3070b47cd710545e1d72f"} Jan 10 06:57:37 crc kubenswrapper[4810]: I0110 06:57:37.029858 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" event={"ID":"e830f9d6-ce70-4241-afcf-11ea7b532c6c","Type":"ContainerStarted","Data":"d6ee140aa04aa8b5963deed5bf93f66cdc6573e820fdfc280dd85a618897733e"} Jan 10 06:57:38 crc kubenswrapper[4810]: I0110 06:57:38.040513 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" event={"ID":"e830f9d6-ce70-4241-afcf-11ea7b532c6c","Type":"ContainerStarted","Data":"fa01adfd4a86f3c563f987823a66acb10e2dd5258fa4c434007dc4474ed62dda"} Jan 10 06:57:40 crc kubenswrapper[4810]: I0110 06:57:40.060508 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" event={"ID":"e830f9d6-ce70-4241-afcf-11ea7b532c6c","Type":"ContainerStarted","Data":"3c79db6ae0d0369fca6bd15319655dfca7e21f3f9bf4a50b6546d2e31c2fdb0d"} Jan 10 06:57:42 crc kubenswrapper[4810]: I0110 06:57:42.083022 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" event={"ID":"e830f9d6-ce70-4241-afcf-11ea7b532c6c","Type":"ContainerStarted","Data":"e610cc1c97675e2a09cbc6aaa01a6776daa68e3b71bc4da6bdbc6278d7d4d9c9"} Jan 10 06:57:43 crc kubenswrapper[4810]: I0110 06:57:43.088210 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:43 crc kubenswrapper[4810]: I0110 06:57:43.088531 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:43 crc kubenswrapper[4810]: I0110 06:57:43.088545 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:43 crc kubenswrapper[4810]: I0110 06:57:43.118936 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" podStartSLOduration=11.118918206 podStartE2EDuration="11.118918206s" podCreationTimestamp="2026-01-10 06:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 06:57:43.116516138 +0000 UTC m=+691.732009031" watchObservedRunningTime="2026-01-10 06:57:43.118918206 +0000 UTC m=+691.734411089" Jan 10 06:57:43 crc 
kubenswrapper[4810]: I0110 06:57:43.150287 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:43 crc kubenswrapper[4810]: I0110 06:57:43.152026 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:57:46 crc kubenswrapper[4810]: I0110 06:57:46.693467 4810 scope.go:117] "RemoveContainer" containerID="119ef4f78bf3e22db81c778c1d3560cd9a432d4c452b44055df285c9a18fa3f0" Jan 10 06:57:46 crc kubenswrapper[4810]: E0110 06:57:46.694643 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-t7gh2_openshift-multus(34d87e8a-cdfb-46ed-97db-2d07cffec516)\"" pod="openshift-multus/multus-t7gh2" podUID="34d87e8a-cdfb-46ed-97db-2d07cffec516" Jan 10 06:57:57 crc kubenswrapper[4810]: I0110 06:57:57.664473 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h"] Jan 10 06:57:57 crc kubenswrapper[4810]: I0110 06:57:57.666515 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:57 crc kubenswrapper[4810]: I0110 06:57:57.674595 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 10 06:57:57 crc kubenswrapper[4810]: I0110 06:57:57.683117 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h"] Jan 10 06:57:57 crc kubenswrapper[4810]: I0110 06:57:57.826038 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h\" (UID: \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:57 crc kubenswrapper[4810]: I0110 06:57:57.826157 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h\" (UID: \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:57 crc kubenswrapper[4810]: I0110 06:57:57.826217 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv9h6\" (UniqueName: \"kubernetes.io/projected/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-kube-api-access-nv9h6\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h\" (UID: \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:57 crc kubenswrapper[4810]: 
I0110 06:57:57.927395 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h\" (UID: \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:57 crc kubenswrapper[4810]: I0110 06:57:57.927492 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h\" (UID: \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:57 crc kubenswrapper[4810]: I0110 06:57:57.927542 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv9h6\" (UniqueName: \"kubernetes.io/projected/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-kube-api-access-nv9h6\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h\" (UID: \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:57 crc kubenswrapper[4810]: I0110 06:57:57.928348 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h\" (UID: \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:57 crc kubenswrapper[4810]: I0110 06:57:57.928337 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h\" (UID: \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:57 crc kubenswrapper[4810]: I0110 06:57:57.952542 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv9h6\" (UniqueName: \"kubernetes.io/projected/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-kube-api-access-nv9h6\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h\" (UID: \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:57 crc kubenswrapper[4810]: I0110 06:57:57.986668 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:58 crc kubenswrapper[4810]: E0110 06:57:58.028774 4810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_openshift-marketplace_97ca5502-79e5-4da9-9267-8cd12ce1d9ef_0(4e6055b6a3139f71cbbfb7a303026c566a7fb7522d53495b023caeba54521239): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 10 06:57:58 crc kubenswrapper[4810]: E0110 06:57:58.029008 4810 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_openshift-marketplace_97ca5502-79e5-4da9-9267-8cd12ce1d9ef_0(4e6055b6a3139f71cbbfb7a303026c566a7fb7522d53495b023caeba54521239): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:58 crc kubenswrapper[4810]: E0110 06:57:58.029324 4810 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_openshift-marketplace_97ca5502-79e5-4da9-9267-8cd12ce1d9ef_0(4e6055b6a3139f71cbbfb7a303026c566a7fb7522d53495b023caeba54521239): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:58 crc kubenswrapper[4810]: E0110 06:57:58.029515 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_openshift-marketplace(97ca5502-79e5-4da9-9267-8cd12ce1d9ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_openshift-marketplace(97ca5502-79e5-4da9-9267-8cd12ce1d9ef)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_openshift-marketplace_97ca5502-79e5-4da9-9267-8cd12ce1d9ef_0(4e6055b6a3139f71cbbfb7a303026c566a7fb7522d53495b023caeba54521239): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" podUID="97ca5502-79e5-4da9-9267-8cd12ce1d9ef" Jan 10 06:57:58 crc kubenswrapper[4810]: I0110 06:57:58.198748 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:58 crc kubenswrapper[4810]: I0110 06:57:58.199456 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:58 crc kubenswrapper[4810]: E0110 06:57:58.234767 4810 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_openshift-marketplace_97ca5502-79e5-4da9-9267-8cd12ce1d9ef_0(dc5d51edd6d43b841b555775b22442fd0a97a4047c1c68922863a727669d7851): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 10 06:57:58 crc kubenswrapper[4810]: E0110 06:57:58.234858 4810 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_openshift-marketplace_97ca5502-79e5-4da9-9267-8cd12ce1d9ef_0(dc5d51edd6d43b841b555775b22442fd0a97a4047c1c68922863a727669d7851): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:58 crc kubenswrapper[4810]: E0110 06:57:58.234895 4810 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_openshift-marketplace_97ca5502-79e5-4da9-9267-8cd12ce1d9ef_0(dc5d51edd6d43b841b555775b22442fd0a97a4047c1c68922863a727669d7851): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:57:58 crc kubenswrapper[4810]: E0110 06:57:58.234969 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_openshift-marketplace(97ca5502-79e5-4da9-9267-8cd12ce1d9ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_openshift-marketplace(97ca5502-79e5-4da9-9267-8cd12ce1d9ef)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_openshift-marketplace_97ca5502-79e5-4da9-9267-8cd12ce1d9ef_0(dc5d51edd6d43b841b555775b22442fd0a97a4047c1c68922863a727669d7851): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" podUID="97ca5502-79e5-4da9-9267-8cd12ce1d9ef" Jan 10 06:58:00 crc kubenswrapper[4810]: I0110 06:58:00.693622 4810 scope.go:117] "RemoveContainer" containerID="119ef4f78bf3e22db81c778c1d3560cd9a432d4c452b44055df285c9a18fa3f0" Jan 10 06:58:01 crc kubenswrapper[4810]: I0110 06:58:01.219898 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7gh2_34d87e8a-cdfb-46ed-97db-2d07cffec516/kube-multus/2.log" Jan 10 06:58:01 crc kubenswrapper[4810]: I0110 06:58:01.221178 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7gh2_34d87e8a-cdfb-46ed-97db-2d07cffec516/kube-multus/1.log" Jan 10 06:58:01 crc kubenswrapper[4810]: I0110 06:58:01.221296 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-t7gh2" event={"ID":"34d87e8a-cdfb-46ed-97db-2d07cffec516","Type":"ContainerStarted","Data":"7b12678c96813b1fa9d82f685b9e7ae2bfb4cb708944bc60c0cf43b6dbefe9b9"} Jan 10 
06:58:03 crc kubenswrapper[4810]: I0110 06:58:03.209219 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fhv9b" Jan 10 06:58:12 crc kubenswrapper[4810]: I0110 06:58:12.032683 4810 scope.go:117] "RemoveContainer" containerID="41fc6d8fbe9181574c77098b0dbec21e58b12499730756b917ff572ce4513f94" Jan 10 06:58:12 crc kubenswrapper[4810]: I0110 06:58:12.692850 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:58:12 crc kubenswrapper[4810]: I0110 06:58:12.693603 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:58:13 crc kubenswrapper[4810]: I0110 06:58:13.243414 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h"] Jan 10 06:58:13 crc kubenswrapper[4810]: W0110 06:58:13.253709 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97ca5502_79e5_4da9_9267_8cd12ce1d9ef.slice/crio-2eb710c66674732c1adc1f314846bd330dc57a2d695c9f568083fcfc32b23fa5 WatchSource:0}: Error finding container 2eb710c66674732c1adc1f314846bd330dc57a2d695c9f568083fcfc32b23fa5: Status 404 returned error can't find the container with id 2eb710c66674732c1adc1f314846bd330dc57a2d695c9f568083fcfc32b23fa5 Jan 10 06:58:13 crc kubenswrapper[4810]: I0110 06:58:13.307723 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" event={"ID":"97ca5502-79e5-4da9-9267-8cd12ce1d9ef","Type":"ContainerStarted","Data":"2eb710c66674732c1adc1f314846bd330dc57a2d695c9f568083fcfc32b23fa5"} Jan 10 06:58:13 crc kubenswrapper[4810]: I0110 06:58:13.312955 
4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-t7gh2_34d87e8a-cdfb-46ed-97db-2d07cffec516/kube-multus/2.log" Jan 10 06:58:14 crc kubenswrapper[4810]: I0110 06:58:14.322430 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" event={"ID":"97ca5502-79e5-4da9-9267-8cd12ce1d9ef","Type":"ContainerStarted","Data":"36ea348a6dc9f5bab1be444de5b6df8a63175158821b76fcaf6f6f3840e4a953"} Jan 10 06:58:15 crc kubenswrapper[4810]: I0110 06:58:15.335856 4810 generic.go:334] "Generic (PLEG): container finished" podID="97ca5502-79e5-4da9-9267-8cd12ce1d9ef" containerID="36ea348a6dc9f5bab1be444de5b6df8a63175158821b76fcaf6f6f3840e4a953" exitCode=0 Jan 10 06:58:15 crc kubenswrapper[4810]: I0110 06:58:15.335923 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" event={"ID":"97ca5502-79e5-4da9-9267-8cd12ce1d9ef","Type":"ContainerDied","Data":"36ea348a6dc9f5bab1be444de5b6df8a63175158821b76fcaf6f6f3840e4a953"} Jan 10 06:58:15 crc kubenswrapper[4810]: I0110 06:58:15.338944 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 10 06:58:17 crc kubenswrapper[4810]: I0110 06:58:17.351755 4810 generic.go:334] "Generic (PLEG): container finished" podID="97ca5502-79e5-4da9-9267-8cd12ce1d9ef" containerID="27dc428d0ebdf9612bca4fe31eddb915cf4c63e8afef699607a9c33b1449ff38" exitCode=0 Jan 10 06:58:17 crc kubenswrapper[4810]: I0110 06:58:17.351907 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" event={"ID":"97ca5502-79e5-4da9-9267-8cd12ce1d9ef","Type":"ContainerDied","Data":"27dc428d0ebdf9612bca4fe31eddb915cf4c63e8afef699607a9c33b1449ff38"} Jan 10 06:58:18 crc kubenswrapper[4810]: I0110 06:58:18.363520 4810 generic.go:334] "Generic 
(PLEG): container finished" podID="97ca5502-79e5-4da9-9267-8cd12ce1d9ef" containerID="2a5e274c0009cd9a9cb09428e2eefe5bfd6692321169acff74587616b698a91e" exitCode=0 Jan 10 06:58:18 crc kubenswrapper[4810]: I0110 06:58:18.363708 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" event={"ID":"97ca5502-79e5-4da9-9267-8cd12ce1d9ef","Type":"ContainerDied","Data":"2a5e274c0009cd9a9cb09428e2eefe5bfd6692321169acff74587616b698a91e"} Jan 10 06:58:19 crc kubenswrapper[4810]: I0110 06:58:19.652255 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:58:19 crc kubenswrapper[4810]: I0110 06:58:19.707344 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-util\") pod \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\" (UID: \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\") " Jan 10 06:58:19 crc kubenswrapper[4810]: I0110 06:58:19.707487 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv9h6\" (UniqueName: \"kubernetes.io/projected/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-kube-api-access-nv9h6\") pod \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\" (UID: \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\") " Jan 10 06:58:19 crc kubenswrapper[4810]: I0110 06:58:19.707517 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-bundle\") pod \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\" (UID: \"97ca5502-79e5-4da9-9267-8cd12ce1d9ef\") " Jan 10 06:58:19 crc kubenswrapper[4810]: I0110 06:58:19.709416 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-bundle" (OuterVolumeSpecName: "bundle") pod "97ca5502-79e5-4da9-9267-8cd12ce1d9ef" (UID: "97ca5502-79e5-4da9-9267-8cd12ce1d9ef"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:58:19 crc kubenswrapper[4810]: I0110 06:58:19.713925 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-kube-api-access-nv9h6" (OuterVolumeSpecName: "kube-api-access-nv9h6") pod "97ca5502-79e5-4da9-9267-8cd12ce1d9ef" (UID: "97ca5502-79e5-4da9-9267-8cd12ce1d9ef"). InnerVolumeSpecName "kube-api-access-nv9h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:58:19 crc kubenswrapper[4810]: I0110 06:58:19.732335 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-util" (OuterVolumeSpecName: "util") pod "97ca5502-79e5-4da9-9267-8cd12ce1d9ef" (UID: "97ca5502-79e5-4da9-9267-8cd12ce1d9ef"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 06:58:19 crc kubenswrapper[4810]: I0110 06:58:19.808425 4810 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-util\") on node \"crc\" DevicePath \"\"" Jan 10 06:58:19 crc kubenswrapper[4810]: I0110 06:58:19.808479 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv9h6\" (UniqueName: \"kubernetes.io/projected/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-kube-api-access-nv9h6\") on node \"crc\" DevicePath \"\"" Jan 10 06:58:19 crc kubenswrapper[4810]: I0110 06:58:19.808496 4810 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97ca5502-79e5-4da9-9267-8cd12ce1d9ef-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 06:58:20 crc kubenswrapper[4810]: I0110 06:58:20.380572 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" event={"ID":"97ca5502-79e5-4da9-9267-8cd12ce1d9ef","Type":"ContainerDied","Data":"2eb710c66674732c1adc1f314846bd330dc57a2d695c9f568083fcfc32b23fa5"} Jan 10 06:58:20 crc kubenswrapper[4810]: I0110 06:58:20.380634 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eb710c66674732c1adc1f314846bd330dc57a2d695c9f568083fcfc32b23fa5" Jan 10 06:58:20 crc kubenswrapper[4810]: I0110 06:58:20.380656 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.860752 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-78d994474f-c66kc"] Jan 10 06:58:30 crc kubenswrapper[4810]: E0110 06:58:30.861776 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ca5502-79e5-4da9-9267-8cd12ce1d9ef" containerName="util" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.861794 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ca5502-79e5-4da9-9267-8cd12ce1d9ef" containerName="util" Jan 10 06:58:30 crc kubenswrapper[4810]: E0110 06:58:30.861806 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ca5502-79e5-4da9-9267-8cd12ce1d9ef" containerName="extract" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.861814 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ca5502-79e5-4da9-9267-8cd12ce1d9ef" containerName="extract" Jan 10 06:58:30 crc kubenswrapper[4810]: E0110 06:58:30.861851 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ca5502-79e5-4da9-9267-8cd12ce1d9ef" containerName="pull" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.861861 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ca5502-79e5-4da9-9267-8cd12ce1d9ef" containerName="pull" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.861995 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ca5502-79e5-4da9-9267-8cd12ce1d9ef" containerName="extract" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.862559 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.865815 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.866207 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.866336 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.866698 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-nqrmd" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.867283 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.918458 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-78d994474f-c66kc"] Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.948075 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c2f8c1a-da9b-4758-8470-7495e89762af-webhook-cert\") pod \"metallb-operator-controller-manager-78d994474f-c66kc\" (UID: \"1c2f8c1a-da9b-4758-8470-7495e89762af\") " pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.948251 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ppf5\" (UniqueName: \"kubernetes.io/projected/1c2f8c1a-da9b-4758-8470-7495e89762af-kube-api-access-7ppf5\") pod 
\"metallb-operator-controller-manager-78d994474f-c66kc\" (UID: \"1c2f8c1a-da9b-4758-8470-7495e89762af\") " pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" Jan 10 06:58:30 crc kubenswrapper[4810]: I0110 06:58:30.948395 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c2f8c1a-da9b-4758-8470-7495e89762af-apiservice-cert\") pod \"metallb-operator-controller-manager-78d994474f-c66kc\" (UID: \"1c2f8c1a-da9b-4758-8470-7495e89762af\") " pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.049174 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c2f8c1a-da9b-4758-8470-7495e89762af-apiservice-cert\") pod \"metallb-operator-controller-manager-78d994474f-c66kc\" (UID: \"1c2f8c1a-da9b-4758-8470-7495e89762af\") " pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.049250 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c2f8c1a-da9b-4758-8470-7495e89762af-webhook-cert\") pod \"metallb-operator-controller-manager-78d994474f-c66kc\" (UID: \"1c2f8c1a-da9b-4758-8470-7495e89762af\") " pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.049290 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ppf5\" (UniqueName: \"kubernetes.io/projected/1c2f8c1a-da9b-4758-8470-7495e89762af-kube-api-access-7ppf5\") pod \"metallb-operator-controller-manager-78d994474f-c66kc\" (UID: \"1c2f8c1a-da9b-4758-8470-7495e89762af\") " pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" Jan 10 06:58:31 crc 
kubenswrapper[4810]: I0110 06:58:31.057400 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c2f8c1a-da9b-4758-8470-7495e89762af-apiservice-cert\") pod \"metallb-operator-controller-manager-78d994474f-c66kc\" (UID: \"1c2f8c1a-da9b-4758-8470-7495e89762af\") " pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.058211 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c2f8c1a-da9b-4758-8470-7495e89762af-webhook-cert\") pod \"metallb-operator-controller-manager-78d994474f-c66kc\" (UID: \"1c2f8c1a-da9b-4758-8470-7495e89762af\") " pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.071068 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ppf5\" (UniqueName: \"kubernetes.io/projected/1c2f8c1a-da9b-4758-8470-7495e89762af-kube-api-access-7ppf5\") pod \"metallb-operator-controller-manager-78d994474f-c66kc\" (UID: \"1c2f8c1a-da9b-4758-8470-7495e89762af\") " pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.179346 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.187162 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2"] Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.188095 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.191102 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.191102 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.192060 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-npghr" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.205293 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2"] Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.356938 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5e40a28-b80a-4ba9-87e7-039e63e9e4d0-apiservice-cert\") pod \"metallb-operator-webhook-server-75f446db8b-dsgn2\" (UID: \"d5e40a28-b80a-4ba9-87e7-039e63e9e4d0\") " pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.356974 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5e40a28-b80a-4ba9-87e7-039e63e9e4d0-webhook-cert\") pod \"metallb-operator-webhook-server-75f446db8b-dsgn2\" (UID: \"d5e40a28-b80a-4ba9-87e7-039e63e9e4d0\") " pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.357021 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlt5m\" (UniqueName: 
\"kubernetes.io/projected/d5e40a28-b80a-4ba9-87e7-039e63e9e4d0-kube-api-access-mlt5m\") pod \"metallb-operator-webhook-server-75f446db8b-dsgn2\" (UID: \"d5e40a28-b80a-4ba9-87e7-039e63e9e4d0\") " pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.459097 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5e40a28-b80a-4ba9-87e7-039e63e9e4d0-apiservice-cert\") pod \"metallb-operator-webhook-server-75f446db8b-dsgn2\" (UID: \"d5e40a28-b80a-4ba9-87e7-039e63e9e4d0\") " pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.459470 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5e40a28-b80a-4ba9-87e7-039e63e9e4d0-webhook-cert\") pod \"metallb-operator-webhook-server-75f446db8b-dsgn2\" (UID: \"d5e40a28-b80a-4ba9-87e7-039e63e9e4d0\") " pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.459525 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlt5m\" (UniqueName: \"kubernetes.io/projected/d5e40a28-b80a-4ba9-87e7-039e63e9e4d0-kube-api-access-mlt5m\") pod \"metallb-operator-webhook-server-75f446db8b-dsgn2\" (UID: \"d5e40a28-b80a-4ba9-87e7-039e63e9e4d0\") " pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.463843 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5e40a28-b80a-4ba9-87e7-039e63e9e4d0-webhook-cert\") pod \"metallb-operator-webhook-server-75f446db8b-dsgn2\" (UID: \"d5e40a28-b80a-4ba9-87e7-039e63e9e4d0\") " pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" Jan 10 
06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.480550 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlt5m\" (UniqueName: \"kubernetes.io/projected/d5e40a28-b80a-4ba9-87e7-039e63e9e4d0-kube-api-access-mlt5m\") pod \"metallb-operator-webhook-server-75f446db8b-dsgn2\" (UID: \"d5e40a28-b80a-4ba9-87e7-039e63e9e4d0\") " pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.489077 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5e40a28-b80a-4ba9-87e7-039e63e9e4d0-apiservice-cert\") pod \"metallb-operator-webhook-server-75f446db8b-dsgn2\" (UID: \"d5e40a28-b80a-4ba9-87e7-039e63e9e4d0\") " pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.531701 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.626339 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-78d994474f-c66kc"] Jan 10 06:58:31 crc kubenswrapper[4810]: W0110 06:58:31.640321 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c2f8c1a_da9b_4758_8470_7495e89762af.slice/crio-25e1844fc23869e983015f8b47bafa1793b018bab2f178bf125cf6f9d32ba2e7 WatchSource:0}: Error finding container 25e1844fc23869e983015f8b47bafa1793b018bab2f178bf125cf6f9d32ba2e7: Status 404 returned error can't find the container with id 25e1844fc23869e983015f8b47bafa1793b018bab2f178bf125cf6f9d32ba2e7 Jan 10 06:58:31 crc kubenswrapper[4810]: I0110 06:58:31.759164 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2"] Jan 10 
06:58:31 crc kubenswrapper[4810]: W0110 06:58:31.766689 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5e40a28_b80a_4ba9_87e7_039e63e9e4d0.slice/crio-454a1d5cd27a69a26d10ef1565690fbaaff6cc3903a06a94560920e8ff3124a3 WatchSource:0}: Error finding container 454a1d5cd27a69a26d10ef1565690fbaaff6cc3903a06a94560920e8ff3124a3: Status 404 returned error can't find the container with id 454a1d5cd27a69a26d10ef1565690fbaaff6cc3903a06a94560920e8ff3124a3 Jan 10 06:58:32 crc kubenswrapper[4810]: I0110 06:58:32.461571 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" event={"ID":"1c2f8c1a-da9b-4758-8470-7495e89762af","Type":"ContainerStarted","Data":"25e1844fc23869e983015f8b47bafa1793b018bab2f178bf125cf6f9d32ba2e7"} Jan 10 06:58:32 crc kubenswrapper[4810]: I0110 06:58:32.468303 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" event={"ID":"d5e40a28-b80a-4ba9-87e7-039e63e9e4d0","Type":"ContainerStarted","Data":"454a1d5cd27a69a26d10ef1565690fbaaff6cc3903a06a94560920e8ff3124a3"} Jan 10 06:58:38 crc kubenswrapper[4810]: I0110 06:58:38.529124 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" event={"ID":"1c2f8c1a-da9b-4758-8470-7495e89762af","Type":"ContainerStarted","Data":"b0253b8d743d6b7b8e3bb5f8b35f6243098a860166289100fe00796486d44006"} Jan 10 06:58:38 crc kubenswrapper[4810]: I0110 06:58:38.529947 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" Jan 10 06:58:38 crc kubenswrapper[4810]: I0110 06:58:38.531824 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" 
event={"ID":"d5e40a28-b80a-4ba9-87e7-039e63e9e4d0","Type":"ContainerStarted","Data":"f4f2775c4d83e80d0159f866aac9da9f8a5f70c1571b87d4d9640a8b22bcab45"} Jan 10 06:58:38 crc kubenswrapper[4810]: I0110 06:58:38.532100 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" Jan 10 06:58:38 crc kubenswrapper[4810]: I0110 06:58:38.565107 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" podStartSLOduration=2.073415728 podStartE2EDuration="8.565083816s" podCreationTimestamp="2026-01-10 06:58:30 +0000 UTC" firstStartedPulling="2026-01-10 06:58:31.643299282 +0000 UTC m=+740.258792165" lastFinishedPulling="2026-01-10 06:58:38.13496737 +0000 UTC m=+746.750460253" observedRunningTime="2026-01-10 06:58:38.559579914 +0000 UTC m=+747.175072807" watchObservedRunningTime="2026-01-10 06:58:38.565083816 +0000 UTC m=+747.180576709" Jan 10 06:58:38 crc kubenswrapper[4810]: I0110 06:58:38.595368 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" podStartSLOduration=1.212711983 podStartE2EDuration="7.595338458s" podCreationTimestamp="2026-01-10 06:58:31 +0000 UTC" firstStartedPulling="2026-01-10 06:58:31.770963149 +0000 UTC m=+740.386456022" lastFinishedPulling="2026-01-10 06:58:38.153589614 +0000 UTC m=+746.769082497" observedRunningTime="2026-01-10 06:58:38.591847384 +0000 UTC m=+747.207340287" watchObservedRunningTime="2026-01-10 06:58:38.595338458 +0000 UTC m=+747.210831421" Jan 10 06:58:49 crc kubenswrapper[4810]: I0110 06:58:49.684568 4810 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 10 06:58:51 crc kubenswrapper[4810]: I0110 06:58:51.537082 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-webhook-server-75f446db8b-dsgn2" Jan 10 06:59:11 crc kubenswrapper[4810]: I0110 06:59:11.183157 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-78d994474f-c66kc" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.120840 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms"] Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.121638 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.124083 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5wkzh" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.124086 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.140353 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fxt7n"] Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.143077 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.144011 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms"] Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.145580 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn6x9\" (UniqueName: \"kubernetes.io/projected/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-kube-api-access-rn6x9\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.145622 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-frr-startup\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.145661 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-frr-conf\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.145678 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-frr-sockets\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.145705 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-metrics-certs\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.145725 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-metrics\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.145745 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5n6p\" (UniqueName: \"kubernetes.io/projected/aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed-kube-api-access-r5n6p\") pod \"frr-k8s-webhook-server-7784b6fcf-rhrms\" (UID: \"aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.145766 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-reloader\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.145783 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-rhrms\" (UID: \"aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.146758 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 
06:59:12.147161 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.203313 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7rjff"] Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.206336 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7rjff" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.208593 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.208758 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.208776 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.208954 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rppdr" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.216142 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-qv4c2"] Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.216964 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.220358 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.231122 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-qv4c2"] Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247302 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn6x9\" (UniqueName: \"kubernetes.io/projected/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-kube-api-access-rn6x9\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247338 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/98e1cf89-483c-49fc-a8b8-e89615b4d86d-metallb-excludel2\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247360 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-frr-startup\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247379 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a4eb8a0-6ca1-4dca-a9a5-37a00569037d-cert\") pod \"controller-5bddd4b946-qv4c2\" (UID: \"7a4eb8a0-6ca1-4dca-a9a5-37a00569037d\") " pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247405 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntblf\" (UniqueName: \"kubernetes.io/projected/7a4eb8a0-6ca1-4dca-a9a5-37a00569037d-kube-api-access-ntblf\") pod \"controller-5bddd4b946-qv4c2\" (UID: \"7a4eb8a0-6ca1-4dca-a9a5-37a00569037d\") " pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247424 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a4eb8a0-6ca1-4dca-a9a5-37a00569037d-metrics-certs\") pod \"controller-5bddd4b946-qv4c2\" (UID: \"7a4eb8a0-6ca1-4dca-a9a5-37a00569037d\") " pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247444 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-frr-sockets\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247461 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-frr-conf\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247478 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-metrics-certs\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247518 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-metrics-certs\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247535 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p26zn\" (UniqueName: \"kubernetes.io/projected/98e1cf89-483c-49fc-a8b8-e89615b4d86d-kube-api-access-p26zn\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247554 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-memberlist\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247569 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-metrics\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247586 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5n6p\" (UniqueName: \"kubernetes.io/projected/aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed-kube-api-access-r5n6p\") pod \"frr-k8s-webhook-server-7784b6fcf-rhrms\" (UID: \"aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247605 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-reloader\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.247619 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-rhrms\" (UID: \"aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" Jan 10 06:59:12 crc kubenswrapper[4810]: E0110 06:59:12.247721 4810 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 10 06:59:12 crc kubenswrapper[4810]: E0110 06:59:12.247761 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed-cert podName:aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed nodeName:}" failed. No retries permitted until 2026-01-10 06:59:12.747746003 +0000 UTC m=+781.363238886 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed-cert") pod "frr-k8s-webhook-server-7784b6fcf-rhrms" (UID: "aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed") : secret "frr-k8s-webhook-server-cert" not found Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.248131 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-frr-sockets\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.248267 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-frr-startup\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.248323 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-frr-conf\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: E0110 06:59:12.248428 4810 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.248671 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-reloader\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: E0110 06:59:12.248695 4810 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-metrics-certs podName:ec952ca1-31c1-4f2c-ae6a-2a8c271304e0 nodeName:}" failed. No retries permitted until 2026-01-10 06:59:12.748677884 +0000 UTC m=+781.364170767 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-metrics-certs") pod "frr-k8s-fxt7n" (UID: "ec952ca1-31c1-4f2c-ae6a-2a8c271304e0") : secret "frr-k8s-certs-secret" not found Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.248621 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-metrics\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.267957 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn6x9\" (UniqueName: \"kubernetes.io/projected/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-kube-api-access-rn6x9\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.270825 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5n6p\" (UniqueName: \"kubernetes.io/projected/aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed-kube-api-access-r5n6p\") pod \"frr-k8s-webhook-server-7784b6fcf-rhrms\" (UID: \"aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.348832 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-metrics-certs\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " 
pod="metallb-system/speaker-7rjff" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.348967 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p26zn\" (UniqueName: \"kubernetes.io/projected/98e1cf89-483c-49fc-a8b8-e89615b4d86d-kube-api-access-p26zn\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:12 crc kubenswrapper[4810]: E0110 06:59:12.348991 4810 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.349013 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-memberlist\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:12 crc kubenswrapper[4810]: E0110 06:59:12.349061 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-metrics-certs podName:98e1cf89-483c-49fc-a8b8-e89615b4d86d nodeName:}" failed. No retries permitted until 2026-01-10 06:59:12.8490439 +0000 UTC m=+781.464536773 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-metrics-certs") pod "speaker-7rjff" (UID: "98e1cf89-483c-49fc-a8b8-e89615b4d86d") : secret "speaker-certs-secret" not found Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.349094 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/98e1cf89-483c-49fc-a8b8-e89615b4d86d-metallb-excludel2\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.349123 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a4eb8a0-6ca1-4dca-a9a5-37a00569037d-cert\") pod \"controller-5bddd4b946-qv4c2\" (UID: \"7a4eb8a0-6ca1-4dca-a9a5-37a00569037d\") " pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.349151 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntblf\" (UniqueName: \"kubernetes.io/projected/7a4eb8a0-6ca1-4dca-a9a5-37a00569037d-kube-api-access-ntblf\") pod \"controller-5bddd4b946-qv4c2\" (UID: \"7a4eb8a0-6ca1-4dca-a9a5-37a00569037d\") " pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.349171 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a4eb8a0-6ca1-4dca-a9a5-37a00569037d-metrics-certs\") pod \"controller-5bddd4b946-qv4c2\" (UID: \"7a4eb8a0-6ca1-4dca-a9a5-37a00569037d\") " pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:12 crc kubenswrapper[4810]: E0110 06:59:12.349371 4810 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 10 06:59:12 crc 
kubenswrapper[4810]: E0110 06:59:12.349447 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a4eb8a0-6ca1-4dca-a9a5-37a00569037d-metrics-certs podName:7a4eb8a0-6ca1-4dca-a9a5-37a00569037d nodeName:}" failed. No retries permitted until 2026-01-10 06:59:12.849422939 +0000 UTC m=+781.464915822 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7a4eb8a0-6ca1-4dca-a9a5-37a00569037d-metrics-certs") pod "controller-5bddd4b946-qv4c2" (UID: "7a4eb8a0-6ca1-4dca-a9a5-37a00569037d") : secret "controller-certs-secret" not found Jan 10 06:59:12 crc kubenswrapper[4810]: E0110 06:59:12.349501 4810 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 10 06:59:12 crc kubenswrapper[4810]: E0110 06:59:12.349573 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-memberlist podName:98e1cf89-483c-49fc-a8b8-e89615b4d86d nodeName:}" failed. No retries permitted until 2026-01-10 06:59:12.849551322 +0000 UTC m=+781.465044435 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-memberlist") pod "speaker-7rjff" (UID: "98e1cf89-483c-49fc-a8b8-e89615b4d86d") : secret "metallb-memberlist" not found Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.349782 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/98e1cf89-483c-49fc-a8b8-e89615b4d86d-metallb-excludel2\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.351232 4810 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.368534 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a4eb8a0-6ca1-4dca-a9a5-37a00569037d-cert\") pod \"controller-5bddd4b946-qv4c2\" (UID: \"7a4eb8a0-6ca1-4dca-a9a5-37a00569037d\") " pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.372083 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p26zn\" (UniqueName: \"kubernetes.io/projected/98e1cf89-483c-49fc-a8b8-e89615b4d86d-kube-api-access-p26zn\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.373706 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntblf\" (UniqueName: \"kubernetes.io/projected/7a4eb8a0-6ca1-4dca-a9a5-37a00569037d-kube-api-access-ntblf\") pod \"controller-5bddd4b946-qv4c2\" (UID: \"7a4eb8a0-6ca1-4dca-a9a5-37a00569037d\") " pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.752913 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-metrics-certs\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.753374 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-rhrms\" (UID: \"aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.758021 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ec952ca1-31c1-4f2c-ae6a-2a8c271304e0-metrics-certs\") pod \"frr-k8s-fxt7n\" (UID: \"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0\") " pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.758920 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-rhrms\" (UID: \"aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.779763 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.854228 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-memberlist\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.854326 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a4eb8a0-6ca1-4dca-a9a5-37a00569037d-metrics-certs\") pod \"controller-5bddd4b946-qv4c2\" (UID: \"7a4eb8a0-6ca1-4dca-a9a5-37a00569037d\") " pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.854361 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-metrics-certs\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:12 crc kubenswrapper[4810]: E0110 06:59:12.854404 4810 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 10 06:59:12 crc kubenswrapper[4810]: E0110 06:59:12.854487 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-memberlist podName:98e1cf89-483c-49fc-a8b8-e89615b4d86d nodeName:}" failed. No retries permitted until 2026-01-10 06:59:13.854464993 +0000 UTC m=+782.469957886 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-memberlist") pod "speaker-7rjff" (UID: "98e1cf89-483c-49fc-a8b8-e89615b4d86d") : secret "metallb-memberlist" not found Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.860607 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7a4eb8a0-6ca1-4dca-a9a5-37a00569037d-metrics-certs\") pod \"controller-5bddd4b946-qv4c2\" (UID: \"7a4eb8a0-6ca1-4dca-a9a5-37a00569037d\") " pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:12 crc kubenswrapper[4810]: I0110 06:59:12.861001 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-metrics-certs\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:13 crc kubenswrapper[4810]: I0110 06:59:13.044155 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" Jan 10 06:59:13 crc kubenswrapper[4810]: I0110 06:59:13.147787 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:13 crc kubenswrapper[4810]: I0110 06:59:13.240828 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms"] Jan 10 06:59:13 crc kubenswrapper[4810]: W0110 06:59:13.247627 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa0c6bd0_4efa_4639_b474_b77ee0c4a2ed.slice/crio-a8e15e80fe00c030c8af7a8f9b885a464e3ba0baffce2a532527700166fdc101 WatchSource:0}: Error finding container a8e15e80fe00c030c8af7a8f9b885a464e3ba0baffce2a532527700166fdc101: Status 404 returned error can't find the container with id a8e15e80fe00c030c8af7a8f9b885a464e3ba0baffce2a532527700166fdc101 Jan 10 06:59:13 crc kubenswrapper[4810]: I0110 06:59:13.339106 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-qv4c2"] Jan 10 06:59:13 crc kubenswrapper[4810]: W0110 06:59:13.345095 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a4eb8a0_6ca1_4dca_a9a5_37a00569037d.slice/crio-f6ce9240e68497276fad0674c6f908280457240363d5ae63018596800a365197 WatchSource:0}: Error finding container f6ce9240e68497276fad0674c6f908280457240363d5ae63018596800a365197: Status 404 returned error can't find the container with id f6ce9240e68497276fad0674c6f908280457240363d5ae63018596800a365197 Jan 10 06:59:13 crc kubenswrapper[4810]: I0110 06:59:13.872091 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-memberlist\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:13 crc kubenswrapper[4810]: E0110 06:59:13.872327 4810 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret 
"metallb-memberlist" not found Jan 10 06:59:13 crc kubenswrapper[4810]: E0110 06:59:13.872668 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-memberlist podName:98e1cf89-483c-49fc-a8b8-e89615b4d86d nodeName:}" failed. No retries permitted until 2026-01-10 06:59:15.872646044 +0000 UTC m=+784.488138937 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-memberlist") pod "speaker-7rjff" (UID: "98e1cf89-483c-49fc-a8b8-e89615b4d86d") : secret "metallb-memberlist" not found Jan 10 06:59:14 crc kubenswrapper[4810]: I0110 06:59:14.222018 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" event={"ID":"aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed","Type":"ContainerStarted","Data":"a8e15e80fe00c030c8af7a8f9b885a464e3ba0baffce2a532527700166fdc101"} Jan 10 06:59:14 crc kubenswrapper[4810]: I0110 06:59:14.223081 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-qv4c2" event={"ID":"7a4eb8a0-6ca1-4dca-a9a5-37a00569037d","Type":"ContainerStarted","Data":"f6ce9240e68497276fad0674c6f908280457240363d5ae63018596800a365197"} Jan 10 06:59:15 crc kubenswrapper[4810]: I0110 06:59:15.234282 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fxt7n" event={"ID":"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0","Type":"ContainerStarted","Data":"60613e32a72d98e6e9d21f64a2a9f63d38990c539ff8cb9bc5d493e6b4ed7c37"} Jan 10 06:59:15 crc kubenswrapper[4810]: I0110 06:59:15.235686 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-qv4c2" event={"ID":"7a4eb8a0-6ca1-4dca-a9a5-37a00569037d","Type":"ContainerStarted","Data":"78b009f2102c91a93161c57536b4da5bddee3bf08a8b24691b6eb0999d34ac58"} Jan 10 06:59:15 crc kubenswrapper[4810]: I0110 06:59:15.895765 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-memberlist\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:15 crc kubenswrapper[4810]: I0110 06:59:15.903418 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/98e1cf89-483c-49fc-a8b8-e89615b4d86d-memberlist\") pod \"speaker-7rjff\" (UID: \"98e1cf89-483c-49fc-a8b8-e89615b4d86d\") " pod="metallb-system/speaker-7rjff" Jan 10 06:59:16 crc kubenswrapper[4810]: I0110 06:59:16.137823 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7rjff" Jan 10 06:59:16 crc kubenswrapper[4810]: I0110 06:59:16.248544 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7rjff" event={"ID":"98e1cf89-483c-49fc-a8b8-e89615b4d86d","Type":"ContainerStarted","Data":"dc70b4f97cd05c6cd359ead4874018db75f5986d04db30fa6dddc19d70498beb"} Jan 10 06:59:17 crc kubenswrapper[4810]: I0110 06:59:17.256303 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7rjff" event={"ID":"98e1cf89-483c-49fc-a8b8-e89615b4d86d","Type":"ContainerStarted","Data":"f290293266efb73bb385ce4d94b0854e7b5e85a6d8444cc464854ee1e087732a"} Jan 10 06:59:20 crc kubenswrapper[4810]: I0110 06:59:20.882907 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 10 06:59:20 crc kubenswrapper[4810]: I0110 06:59:20.882985 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" 
podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 10 06:59:24 crc kubenswrapper[4810]: I0110 06:59:24.297324 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" event={"ID":"aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed","Type":"ContainerStarted","Data":"5a2ff332728c2ac6287c676382cc5a8005d38484177d24b8ee57a2bfaad98250"} Jan 10 06:59:24 crc kubenswrapper[4810]: I0110 06:59:24.298090 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" Jan 10 06:59:24 crc kubenswrapper[4810]: I0110 06:59:24.304391 4810 generic.go:334] "Generic (PLEG): container finished" podID="ec952ca1-31c1-4f2c-ae6a-2a8c271304e0" containerID="fe9af876b2e5e347d9a6acd12e5685382b7c3bc9f420e008a86af51056c83a75" exitCode=0 Jan 10 06:59:24 crc kubenswrapper[4810]: I0110 06:59:24.304476 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fxt7n" event={"ID":"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0","Type":"ContainerDied","Data":"fe9af876b2e5e347d9a6acd12e5685382b7c3bc9f420e008a86af51056c83a75"} Jan 10 06:59:24 crc kubenswrapper[4810]: I0110 06:59:24.307381 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-qv4c2" event={"ID":"7a4eb8a0-6ca1-4dca-a9a5-37a00569037d","Type":"ContainerStarted","Data":"03668b6372b76e4c5c5370f11568267f6725fd7dedb55fb3a213625474cd0075"} Jan 10 06:59:24 crc kubenswrapper[4810]: I0110 06:59:24.307702 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:24 crc kubenswrapper[4810]: I0110 06:59:24.311648 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-qv4c2" Jan 10 06:59:24 crc 
kubenswrapper[4810]: I0110 06:59:24.317661 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7rjff" event={"ID":"98e1cf89-483c-49fc-a8b8-e89615b4d86d","Type":"ContainerStarted","Data":"f4cd8e51b451981892f767bf0b913f33d5c57181114852b4f051dd9b9dec0adb"} Jan 10 06:59:24 crc kubenswrapper[4810]: I0110 06:59:24.318442 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7rjff" Jan 10 06:59:24 crc kubenswrapper[4810]: I0110 06:59:24.336248 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" podStartSLOduration=2.611368526 podStartE2EDuration="12.336221203s" podCreationTimestamp="2026-01-10 06:59:12 +0000 UTC" firstStartedPulling="2026-01-10 06:59:13.251066449 +0000 UTC m=+781.866559332" lastFinishedPulling="2026-01-10 06:59:22.975919086 +0000 UTC m=+791.591412009" observedRunningTime="2026-01-10 06:59:24.328109449 +0000 UTC m=+792.943602342" watchObservedRunningTime="2026-01-10 06:59:24.336221203 +0000 UTC m=+792.951714116" Jan 10 06:59:24 crc kubenswrapper[4810]: I0110 06:59:24.365454 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-qv4c2" podStartSLOduration=3.898202049 podStartE2EDuration="12.365423759s" podCreationTimestamp="2026-01-10 06:59:12 +0000 UTC" firstStartedPulling="2026-01-10 06:59:14.499515386 +0000 UTC m=+783.115008269" lastFinishedPulling="2026-01-10 06:59:22.966737046 +0000 UTC m=+791.582229979" observedRunningTime="2026-01-10 06:59:24.360227356 +0000 UTC m=+792.975720279" watchObservedRunningTime="2026-01-10 06:59:24.365423759 +0000 UTC m=+792.980916662" Jan 10 06:59:24 crc kubenswrapper[4810]: I0110 06:59:24.426460 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-7rjff" podStartSLOduration=6.136226785 podStartE2EDuration="12.426443386s" podCreationTimestamp="2026-01-10 06:59:12 
+0000 UTC" firstStartedPulling="2026-01-10 06:59:16.68508804 +0000 UTC m=+785.300580923" lastFinishedPulling="2026-01-10 06:59:22.975304611 +0000 UTC m=+791.590797524" observedRunningTime="2026-01-10 06:59:24.422498321 +0000 UTC m=+793.037991214" watchObservedRunningTime="2026-01-10 06:59:24.426443386 +0000 UTC m=+793.041936279" Jan 10 06:59:25 crc kubenswrapper[4810]: I0110 06:59:25.329077 4810 generic.go:334] "Generic (PLEG): container finished" podID="ec952ca1-31c1-4f2c-ae6a-2a8c271304e0" containerID="8ee22ffc80cc6eb8fa0d8298d460f6e7e325cb81725361fb18999739cce19273" exitCode=0 Jan 10 06:59:25 crc kubenswrapper[4810]: I0110 06:59:25.331808 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fxt7n" event={"ID":"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0","Type":"ContainerDied","Data":"8ee22ffc80cc6eb8fa0d8298d460f6e7e325cb81725361fb18999739cce19273"} Jan 10 06:59:26 crc kubenswrapper[4810]: I0110 06:59:26.142883 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7rjff" Jan 10 06:59:26 crc kubenswrapper[4810]: I0110 06:59:26.341989 4810 generic.go:334] "Generic (PLEG): container finished" podID="ec952ca1-31c1-4f2c-ae6a-2a8c271304e0" containerID="eed0229b4a5afa3d8607a1a3f6ea1c7d2fa3809ada4ae10a0b180d3b06ae5396" exitCode=0 Jan 10 06:59:26 crc kubenswrapper[4810]: I0110 06:59:26.342035 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fxt7n" event={"ID":"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0","Type":"ContainerDied","Data":"eed0229b4a5afa3d8607a1a3f6ea1c7d2fa3809ada4ae10a0b180d3b06ae5396"} Jan 10 06:59:27 crc kubenswrapper[4810]: I0110 06:59:27.351693 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fxt7n" event={"ID":"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0","Type":"ContainerStarted","Data":"bbaed85977b97503794e62c8f11c6d67ccbab1607aed92110186b605d4e6e863"} Jan 10 06:59:27 crc kubenswrapper[4810]: I0110 06:59:27.352045 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fxt7n" event={"ID":"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0","Type":"ContainerStarted","Data":"19034d37b81c9107739feb93a99edd80a260d650ef821c396c23315382f37562"} Jan 10 06:59:27 crc kubenswrapper[4810]: I0110 06:59:27.352060 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fxt7n" event={"ID":"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0","Type":"ContainerStarted","Data":"93920327c1c9cb2e2ffe08e7a679a38d067a903a4ed0f9a13d55dcc83ede4f46"} Jan 10 06:59:27 crc kubenswrapper[4810]: I0110 06:59:27.352072 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fxt7n" event={"ID":"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0","Type":"ContainerStarted","Data":"ef834b00782032360a68bf65e3d5fed7781aebae8ef647a0b7be140b1e81911b"} Jan 10 06:59:28 crc kubenswrapper[4810]: I0110 06:59:28.383631 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fxt7n" event={"ID":"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0","Type":"ContainerStarted","Data":"979003a74b276c5cd808ad1911beb91213d5b2f4abf8933a1721289ae7910cb8"} Jan 10 06:59:29 crc kubenswrapper[4810]: I0110 06:59:29.391625 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fxt7n" event={"ID":"ec952ca1-31c1-4f2c-ae6a-2a8c271304e0","Type":"ContainerStarted","Data":"caeb1fbfff50b78a82215184df62dc7a31f2e1232b3d073441d67cacba6dfb6b"} Jan 10 06:59:29 crc kubenswrapper[4810]: I0110 06:59:29.391893 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:29 crc kubenswrapper[4810]: I0110 06:59:29.425983 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fxt7n" podStartSLOduration=8.885520234 podStartE2EDuration="17.425969011s" podCreationTimestamp="2026-01-10 06:59:12 +0000 UTC" firstStartedPulling="2026-01-10 06:59:14.480845471 +0000 UTC m=+783.096338354" 
lastFinishedPulling="2026-01-10 06:59:23.021294208 +0000 UTC m=+791.636787131" observedRunningTime="2026-01-10 06:59:29.423122413 +0000 UTC m=+798.038615296" watchObservedRunningTime="2026-01-10 06:59:29.425969011 +0000 UTC m=+798.041461894" Jan 10 06:59:32 crc kubenswrapper[4810]: I0110 06:59:32.708781 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-sbjzl"] Jan 10 06:59:32 crc kubenswrapper[4810]: I0110 06:59:32.713865 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-sbjzl" Jan 10 06:59:32 crc kubenswrapper[4810]: I0110 06:59:32.716920 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 10 06:59:32 crc kubenswrapper[4810]: I0110 06:59:32.717626 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 10 06:59:32 crc kubenswrapper[4810]: I0110 06:59:32.718259 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-c4mrm" Jan 10 06:59:32 crc kubenswrapper[4810]: I0110 06:59:32.739377 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-sbjzl"] Jan 10 06:59:32 crc kubenswrapper[4810]: I0110 06:59:32.781620 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:32 crc kubenswrapper[4810]: I0110 06:59:32.835219 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld79j\" (UniqueName: \"kubernetes.io/projected/2aa984dd-a41f-4ee7-b8d6-dc837d555334-kube-api-access-ld79j\") pod \"mariadb-operator-index-sbjzl\" (UID: \"2aa984dd-a41f-4ee7-b8d6-dc837d555334\") " pod="openstack-operators/mariadb-operator-index-sbjzl" Jan 10 06:59:32 crc kubenswrapper[4810]: I0110 06:59:32.843961 
4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:32 crc kubenswrapper[4810]: I0110 06:59:32.936465 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld79j\" (UniqueName: \"kubernetes.io/projected/2aa984dd-a41f-4ee7-b8d6-dc837d555334-kube-api-access-ld79j\") pod \"mariadb-operator-index-sbjzl\" (UID: \"2aa984dd-a41f-4ee7-b8d6-dc837d555334\") " pod="openstack-operators/mariadb-operator-index-sbjzl" Jan 10 06:59:32 crc kubenswrapper[4810]: I0110 06:59:32.957854 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld79j\" (UniqueName: \"kubernetes.io/projected/2aa984dd-a41f-4ee7-b8d6-dc837d555334-kube-api-access-ld79j\") pod \"mariadb-operator-index-sbjzl\" (UID: \"2aa984dd-a41f-4ee7-b8d6-dc837d555334\") " pod="openstack-operators/mariadb-operator-index-sbjzl" Jan 10 06:59:33 crc kubenswrapper[4810]: I0110 06:59:33.051360 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-rhrms" Jan 10 06:59:33 crc kubenswrapper[4810]: I0110 06:59:33.058970 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-sbjzl" Jan 10 06:59:33 crc kubenswrapper[4810]: I0110 06:59:33.505023 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-sbjzl"] Jan 10 06:59:33 crc kubenswrapper[4810]: W0110 06:59:33.512424 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aa984dd_a41f_4ee7_b8d6_dc837d555334.slice/crio-0f9a8c11e884aebf4295f6b7f7c5a7c0dd9de9783a911cf9069c5da07c8f6ff2 WatchSource:0}: Error finding container 0f9a8c11e884aebf4295f6b7f7c5a7c0dd9de9783a911cf9069c5da07c8f6ff2: Status 404 returned error can't find the container with id 0f9a8c11e884aebf4295f6b7f7c5a7c0dd9de9783a911cf9069c5da07c8f6ff2 Jan 10 06:59:34 crc kubenswrapper[4810]: I0110 06:59:34.426749 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-sbjzl" event={"ID":"2aa984dd-a41f-4ee7-b8d6-dc837d555334","Type":"ContainerStarted","Data":"0f9a8c11e884aebf4295f6b7f7c5a7c0dd9de9783a911cf9069c5da07c8f6ff2"} Jan 10 06:59:36 crc kubenswrapper[4810]: I0110 06:59:36.085355 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-sbjzl"] Jan 10 06:59:36 crc kubenswrapper[4810]: I0110 06:59:36.443379 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-sbjzl" event={"ID":"2aa984dd-a41f-4ee7-b8d6-dc837d555334","Type":"ContainerStarted","Data":"e235c2a1d8d88913600a3ff5d8152ffc4cafd5127d5c8a6d8d75c09fa4a18d38"} Jan 10 06:59:36 crc kubenswrapper[4810]: I0110 06:59:36.466687 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-sbjzl" podStartSLOduration=2.542321035 podStartE2EDuration="4.466651513s" podCreationTimestamp="2026-01-10 06:59:32 +0000 UTC" firstStartedPulling="2026-01-10 06:59:33.514805261 +0000 UTC 
m=+802.130298154" lastFinishedPulling="2026-01-10 06:59:35.439135709 +0000 UTC m=+804.054628632" observedRunningTime="2026-01-10 06:59:36.462359661 +0000 UTC m=+805.077852594" watchObservedRunningTime="2026-01-10 06:59:36.466651513 +0000 UTC m=+805.082144436" Jan 10 06:59:36 crc kubenswrapper[4810]: I0110 06:59:36.692316 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-2zn9d"] Jan 10 06:59:36 crc kubenswrapper[4810]: I0110 06:59:36.693014 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-2zn9d" Jan 10 06:59:36 crc kubenswrapper[4810]: I0110 06:59:36.724120 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-2zn9d"] Jan 10 06:59:36 crc kubenswrapper[4810]: I0110 06:59:36.788153 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk7l7\" (UniqueName: \"kubernetes.io/projected/a2647002-dfbd-4b5d-9732-f0cd8daa21ca-kube-api-access-pk7l7\") pod \"mariadb-operator-index-2zn9d\" (UID: \"a2647002-dfbd-4b5d-9732-f0cd8daa21ca\") " pod="openstack-operators/mariadb-operator-index-2zn9d" Jan 10 06:59:36 crc kubenswrapper[4810]: I0110 06:59:36.889908 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk7l7\" (UniqueName: \"kubernetes.io/projected/a2647002-dfbd-4b5d-9732-f0cd8daa21ca-kube-api-access-pk7l7\") pod \"mariadb-operator-index-2zn9d\" (UID: \"a2647002-dfbd-4b5d-9732-f0cd8daa21ca\") " pod="openstack-operators/mariadb-operator-index-2zn9d" Jan 10 06:59:36 crc kubenswrapper[4810]: I0110 06:59:36.910050 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk7l7\" (UniqueName: \"kubernetes.io/projected/a2647002-dfbd-4b5d-9732-f0cd8daa21ca-kube-api-access-pk7l7\") pod \"mariadb-operator-index-2zn9d\" (UID: \"a2647002-dfbd-4b5d-9732-f0cd8daa21ca\") " 
pod="openstack-operators/mariadb-operator-index-2zn9d" Jan 10 06:59:37 crc kubenswrapper[4810]: I0110 06:59:37.028567 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-2zn9d" Jan 10 06:59:37 crc kubenswrapper[4810]: I0110 06:59:37.448842 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-sbjzl" podUID="2aa984dd-a41f-4ee7-b8d6-dc837d555334" containerName="registry-server" containerID="cri-o://e235c2a1d8d88913600a3ff5d8152ffc4cafd5127d5c8a6d8d75c09fa4a18d38" gracePeriod=2 Jan 10 06:59:37 crc kubenswrapper[4810]: I0110 06:59:37.489339 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-2zn9d"] Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.351447 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-sbjzl" Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.456752 4810 generic.go:334] "Generic (PLEG): container finished" podID="2aa984dd-a41f-4ee7-b8d6-dc837d555334" containerID="e235c2a1d8d88913600a3ff5d8152ffc4cafd5127d5c8a6d8d75c09fa4a18d38" exitCode=0 Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.456796 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-sbjzl" Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.456823 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-sbjzl" event={"ID":"2aa984dd-a41f-4ee7-b8d6-dc837d555334","Type":"ContainerDied","Data":"e235c2a1d8d88913600a3ff5d8152ffc4cafd5127d5c8a6d8d75c09fa4a18d38"} Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.457148 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-sbjzl" event={"ID":"2aa984dd-a41f-4ee7-b8d6-dc837d555334","Type":"ContainerDied","Data":"0f9a8c11e884aebf4295f6b7f7c5a7c0dd9de9783a911cf9069c5da07c8f6ff2"} Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.457260 4810 scope.go:117] "RemoveContainer" containerID="e235c2a1d8d88913600a3ff5d8152ffc4cafd5127d5c8a6d8d75c09fa4a18d38" Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.458935 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-2zn9d" event={"ID":"a2647002-dfbd-4b5d-9732-f0cd8daa21ca","Type":"ContainerStarted","Data":"c14c78ab0f678fa97eb46b6c13a0dbab374a424c9ec51511777bb51b9b048826"} Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.458962 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-2zn9d" event={"ID":"a2647002-dfbd-4b5d-9732-f0cd8daa21ca","Type":"ContainerStarted","Data":"f24d738dcfccc887924fe7163f8f38067a045e2a3569e8f5218ab826fd16051c"} Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.474332 4810 scope.go:117] "RemoveContainer" containerID="e235c2a1d8d88913600a3ff5d8152ffc4cafd5127d5c8a6d8d75c09fa4a18d38" Jan 10 06:59:38 crc kubenswrapper[4810]: E0110 06:59:38.475184 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e235c2a1d8d88913600a3ff5d8152ffc4cafd5127d5c8a6d8d75c09fa4a18d38\": 
container with ID starting with e235c2a1d8d88913600a3ff5d8152ffc4cafd5127d5c8a6d8d75c09fa4a18d38 not found: ID does not exist" containerID="e235c2a1d8d88913600a3ff5d8152ffc4cafd5127d5c8a6d8d75c09fa4a18d38" Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.475280 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e235c2a1d8d88913600a3ff5d8152ffc4cafd5127d5c8a6d8d75c09fa4a18d38"} err="failed to get container status \"e235c2a1d8d88913600a3ff5d8152ffc4cafd5127d5c8a6d8d75c09fa4a18d38\": rpc error: code = NotFound desc = could not find container \"e235c2a1d8d88913600a3ff5d8152ffc4cafd5127d5c8a6d8d75c09fa4a18d38\": container with ID starting with e235c2a1d8d88913600a3ff5d8152ffc4cafd5127d5c8a6d8d75c09fa4a18d38 not found: ID does not exist" Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.481049 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-2zn9d" podStartSLOduration=2.051359366 podStartE2EDuration="2.481028952s" podCreationTimestamp="2026-01-10 06:59:36 +0000 UTC" firstStartedPulling="2026-01-10 06:59:37.503844168 +0000 UTC m=+806.119337051" lastFinishedPulling="2026-01-10 06:59:37.933513744 +0000 UTC m=+806.549006637" observedRunningTime="2026-01-10 06:59:38.477915628 +0000 UTC m=+807.093408541" watchObservedRunningTime="2026-01-10 06:59:38.481028952 +0000 UTC m=+807.096521845" Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.513789 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld79j\" (UniqueName: \"kubernetes.io/projected/2aa984dd-a41f-4ee7-b8d6-dc837d555334-kube-api-access-ld79j\") pod \"2aa984dd-a41f-4ee7-b8d6-dc837d555334\" (UID: \"2aa984dd-a41f-4ee7-b8d6-dc837d555334\") " Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.519644 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2aa984dd-a41f-4ee7-b8d6-dc837d555334-kube-api-access-ld79j" (OuterVolumeSpecName: "kube-api-access-ld79j") pod "2aa984dd-a41f-4ee7-b8d6-dc837d555334" (UID: "2aa984dd-a41f-4ee7-b8d6-dc837d555334"). InnerVolumeSpecName "kube-api-access-ld79j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.615665 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld79j\" (UniqueName: \"kubernetes.io/projected/2aa984dd-a41f-4ee7-b8d6-dc837d555334-kube-api-access-ld79j\") on node \"crc\" DevicePath \"\"" Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.792845 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-sbjzl"] Jan 10 06:59:38 crc kubenswrapper[4810]: I0110 06:59:38.806891 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-sbjzl"] Jan 10 06:59:39 crc kubenswrapper[4810]: I0110 06:59:39.700082 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa984dd-a41f-4ee7-b8d6-dc837d555334" path="/var/lib/kubelet/pods/2aa984dd-a41f-4ee7-b8d6-dc837d555334/volumes" Jan 10 06:59:42 crc kubenswrapper[4810]: I0110 06:59:42.786008 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fxt7n" Jan 10 06:59:47 crc kubenswrapper[4810]: I0110 06:59:47.029357 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-2zn9d" Jan 10 06:59:47 crc kubenswrapper[4810]: I0110 06:59:47.029726 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-2zn9d" Jan 10 06:59:47 crc kubenswrapper[4810]: I0110 06:59:47.068515 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-2zn9d" Jan 10 06:59:47 crc kubenswrapper[4810]: I0110 
06:59:47.576597 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-2zn9d" Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.547532 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"] Jan 10 06:59:48 crc kubenswrapper[4810]: E0110 06:59:48.547924 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa984dd-a41f-4ee7-b8d6-dc837d555334" containerName="registry-server" Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.547945 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa984dd-a41f-4ee7-b8d6-dc837d555334" containerName="registry-server" Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.548117 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa984dd-a41f-4ee7-b8d6-dc837d555334" containerName="registry-server" Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.549523 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"
Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.553684 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-8bf9r"
Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.568107 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"]
Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.653461 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e522d77-9796-44d0-9c0c-6b92d799d4e8-util\") pod \"1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn\" (UID: \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\") " pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"
Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.653526 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvpkf\" (UniqueName: \"kubernetes.io/projected/0e522d77-9796-44d0-9c0c-6b92d799d4e8-kube-api-access-kvpkf\") pod \"1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn\" (UID: \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\") " pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"
Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.653666 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e522d77-9796-44d0-9c0c-6b92d799d4e8-bundle\") pod \"1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn\" (UID: \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\") " pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"
Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.755244 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvpkf\" (UniqueName: \"kubernetes.io/projected/0e522d77-9796-44d0-9c0c-6b92d799d4e8-kube-api-access-kvpkf\") pod \"1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn\" (UID: \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\") " pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"
Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.755301 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e522d77-9796-44d0-9c0c-6b92d799d4e8-util\") pod \"1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn\" (UID: \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\") " pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"
Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.755364 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e522d77-9796-44d0-9c0c-6b92d799d4e8-bundle\") pod \"1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn\" (UID: \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\") " pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"
Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.755779 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e522d77-9796-44d0-9c0c-6b92d799d4e8-util\") pod \"1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn\" (UID: \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\") " pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"
Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.755884 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e522d77-9796-44d0-9c0c-6b92d799d4e8-bundle\") pod \"1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn\" (UID: \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\") " pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"
Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.786236 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvpkf\" (UniqueName: \"kubernetes.io/projected/0e522d77-9796-44d0-9c0c-6b92d799d4e8-kube-api-access-kvpkf\") pod \"1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn\" (UID: \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\") " pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"
Jan 10 06:59:48 crc kubenswrapper[4810]: I0110 06:59:48.871419 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"
Jan 10 06:59:49 crc kubenswrapper[4810]: I0110 06:59:49.315252 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"]
Jan 10 06:59:49 crc kubenswrapper[4810]: W0110 06:59:49.324882 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e522d77_9796_44d0_9c0c_6b92d799d4e8.slice/crio-ffa44eef162467c909a16d1874d6eacb34cc367f06291a19334b14b0338ffb43 WatchSource:0}: Error finding container ffa44eef162467c909a16d1874d6eacb34cc367f06291a19334b14b0338ffb43: Status 404 returned error can't find the container with id ffa44eef162467c909a16d1874d6eacb34cc367f06291a19334b14b0338ffb43
Jan 10 06:59:49 crc kubenswrapper[4810]: I0110 06:59:49.566718 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn" event={"ID":"0e522d77-9796-44d0-9c0c-6b92d799d4e8","Type":"ContainerStarted","Data":"ffa44eef162467c909a16d1874d6eacb34cc367f06291a19334b14b0338ffb43"}
Jan 10 06:59:50 crc kubenswrapper[4810]: I0110 06:59:50.576941 4810 generic.go:334] "Generic (PLEG): container finished" podID="0e522d77-9796-44d0-9c0c-6b92d799d4e8" containerID="88641b5063d73adc388c7dc0924d1677e02fc2e6693fdb7d1ce3f48c056b15c6" exitCode=0
Jan 10 06:59:50 crc kubenswrapper[4810]: I0110 06:59:50.577178 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn" event={"ID":"0e522d77-9796-44d0-9c0c-6b92d799d4e8","Type":"ContainerDied","Data":"88641b5063d73adc388c7dc0924d1677e02fc2e6693fdb7d1ce3f48c056b15c6"}
Jan 10 06:59:50 crc kubenswrapper[4810]: I0110 06:59:50.882934 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 06:59:50 crc kubenswrapper[4810]: I0110 06:59:50.883019 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 06:59:54 crc kubenswrapper[4810]: I0110 06:59:54.610520 4810 generic.go:334] "Generic (PLEG): container finished" podID="0e522d77-9796-44d0-9c0c-6b92d799d4e8" containerID="d4c9794afdb002fea55f6652ae0a8ddcd6b5685a9b41bb05282cd914b6aca3e0" exitCode=0
Jan 10 06:59:54 crc kubenswrapper[4810]: I0110 06:59:54.611159 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn" event={"ID":"0e522d77-9796-44d0-9c0c-6b92d799d4e8","Type":"ContainerDied","Data":"d4c9794afdb002fea55f6652ae0a8ddcd6b5685a9b41bb05282cd914b6aca3e0"}
Jan 10 06:59:55 crc kubenswrapper[4810]: I0110 06:59:55.623181 4810 generic.go:334] "Generic (PLEG): container finished" podID="0e522d77-9796-44d0-9c0c-6b92d799d4e8" containerID="2cf2be50ae86bac9d79e09750fd51de373ac70cf5c1e9c18b8e50a69f7fa6358" exitCode=0
Jan 10 06:59:55 crc kubenswrapper[4810]: I0110 06:59:55.623318 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn" event={"ID":"0e522d77-9796-44d0-9c0c-6b92d799d4e8","Type":"ContainerDied","Data":"2cf2be50ae86bac9d79e09750fd51de373ac70cf5c1e9c18b8e50a69f7fa6358"}
Jan 10 06:59:56 crc kubenswrapper[4810]: I0110 06:59:56.973500 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"
Jan 10 06:59:57 crc kubenswrapper[4810]: I0110 06:59:57.102157 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e522d77-9796-44d0-9c0c-6b92d799d4e8-bundle\") pod \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\" (UID: \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\") "
Jan 10 06:59:57 crc kubenswrapper[4810]: I0110 06:59:57.102645 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e522d77-9796-44d0-9c0c-6b92d799d4e8-util\") pod \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\" (UID: \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\") "
Jan 10 06:59:57 crc kubenswrapper[4810]: I0110 06:59:57.102781 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvpkf\" (UniqueName: \"kubernetes.io/projected/0e522d77-9796-44d0-9c0c-6b92d799d4e8-kube-api-access-kvpkf\") pod \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\" (UID: \"0e522d77-9796-44d0-9c0c-6b92d799d4e8\") "
Jan 10 06:59:57 crc kubenswrapper[4810]: I0110 06:59:57.104064 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e522d77-9796-44d0-9c0c-6b92d799d4e8-bundle" (OuterVolumeSpecName: "bundle") pod "0e522d77-9796-44d0-9c0c-6b92d799d4e8" (UID: "0e522d77-9796-44d0-9c0c-6b92d799d4e8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 06:59:57 crc kubenswrapper[4810]: I0110 06:59:57.111047 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e522d77-9796-44d0-9c0c-6b92d799d4e8-kube-api-access-kvpkf" (OuterVolumeSpecName: "kube-api-access-kvpkf") pod "0e522d77-9796-44d0-9c0c-6b92d799d4e8" (UID: "0e522d77-9796-44d0-9c0c-6b92d799d4e8"). InnerVolumeSpecName "kube-api-access-kvpkf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 06:59:57 crc kubenswrapper[4810]: I0110 06:59:57.123673 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e522d77-9796-44d0-9c0c-6b92d799d4e8-util" (OuterVolumeSpecName: "util") pod "0e522d77-9796-44d0-9c0c-6b92d799d4e8" (UID: "0e522d77-9796-44d0-9c0c-6b92d799d4e8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 06:59:57 crc kubenswrapper[4810]: I0110 06:59:57.204542 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvpkf\" (UniqueName: \"kubernetes.io/projected/0e522d77-9796-44d0-9c0c-6b92d799d4e8-kube-api-access-kvpkf\") on node \"crc\" DevicePath \"\""
Jan 10 06:59:57 crc kubenswrapper[4810]: I0110 06:59:57.204593 4810 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e522d77-9796-44d0-9c0c-6b92d799d4e8-bundle\") on node \"crc\" DevicePath \"\""
Jan 10 06:59:57 crc kubenswrapper[4810]: I0110 06:59:57.204616 4810 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e522d77-9796-44d0-9c0c-6b92d799d4e8-util\") on node \"crc\" DevicePath \"\""
Jan 10 06:59:57 crc kubenswrapper[4810]: I0110 06:59:57.642820 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn" event={"ID":"0e522d77-9796-44d0-9c0c-6b92d799d4e8","Type":"ContainerDied","Data":"ffa44eef162467c909a16d1874d6eacb34cc367f06291a19334b14b0338ffb43"}
Jan 10 06:59:57 crc kubenswrapper[4810]: I0110 06:59:57.642874 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffa44eef162467c909a16d1874d6eacb34cc367f06291a19334b14b0338ffb43"
Jan 10 06:59:57 crc kubenswrapper[4810]: I0110 06:59:57.642926 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.222504 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"]
Jan 10 07:00:00 crc kubenswrapper[4810]: E0110 07:00:00.223058 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e522d77-9796-44d0-9c0c-6b92d799d4e8" containerName="util"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.223075 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e522d77-9796-44d0-9c0c-6b92d799d4e8" containerName="util"
Jan 10 07:00:00 crc kubenswrapper[4810]: E0110 07:00:00.223084 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e522d77-9796-44d0-9c0c-6b92d799d4e8" containerName="extract"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.223093 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e522d77-9796-44d0-9c0c-6b92d799d4e8" containerName="extract"
Jan 10 07:00:00 crc kubenswrapper[4810]: E0110 07:00:00.223110 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e522d77-9796-44d0-9c0c-6b92d799d4e8" containerName="pull"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.223118 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e522d77-9796-44d0-9c0c-6b92d799d4e8" containerName="pull"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.223266 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e522d77-9796-44d0-9c0c-6b92d799d4e8" containerName="extract"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.223722 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.226645 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.226810 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.242444 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"]
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.344120 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4587e1-c827-4a87-961c-bbeddee03974-secret-volume\") pod \"collect-profiles-29467140-4htfk\" (UID: \"3b4587e1-c827-4a87-961c-bbeddee03974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.344223 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnr78\" (UniqueName: \"kubernetes.io/projected/3b4587e1-c827-4a87-961c-bbeddee03974-kube-api-access-nnr78\") pod \"collect-profiles-29467140-4htfk\" (UID: \"3b4587e1-c827-4a87-961c-bbeddee03974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.344387 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4587e1-c827-4a87-961c-bbeddee03974-config-volume\") pod \"collect-profiles-29467140-4htfk\" (UID: \"3b4587e1-c827-4a87-961c-bbeddee03974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.445230 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnr78\" (UniqueName: \"kubernetes.io/projected/3b4587e1-c827-4a87-961c-bbeddee03974-kube-api-access-nnr78\") pod \"collect-profiles-29467140-4htfk\" (UID: \"3b4587e1-c827-4a87-961c-bbeddee03974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.445284 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4587e1-c827-4a87-961c-bbeddee03974-config-volume\") pod \"collect-profiles-29467140-4htfk\" (UID: \"3b4587e1-c827-4a87-961c-bbeddee03974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.445358 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4587e1-c827-4a87-961c-bbeddee03974-secret-volume\") pod \"collect-profiles-29467140-4htfk\" (UID: \"3b4587e1-c827-4a87-961c-bbeddee03974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.446335 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4587e1-c827-4a87-961c-bbeddee03974-config-volume\") pod \"collect-profiles-29467140-4htfk\" (UID: \"3b4587e1-c827-4a87-961c-bbeddee03974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.449546 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4587e1-c827-4a87-961c-bbeddee03974-secret-volume\") pod \"collect-profiles-29467140-4htfk\" (UID: \"3b4587e1-c827-4a87-961c-bbeddee03974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.465396 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnr78\" (UniqueName: \"kubernetes.io/projected/3b4587e1-c827-4a87-961c-bbeddee03974-kube-api-access-nnr78\") pod \"collect-profiles-29467140-4htfk\" (UID: \"3b4587e1-c827-4a87-961c-bbeddee03974\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.540499 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"
Jan 10 07:00:00 crc kubenswrapper[4810]: I0110 07:00:00.745573 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"]
Jan 10 07:00:00 crc kubenswrapper[4810]: W0110 07:00:00.750455 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b4587e1_c827_4a87_961c_bbeddee03974.slice/crio-d23946d60e81bda66273a726fa3686a2ce4f5c1b078d1a6711037a169791a740 WatchSource:0}: Error finding container d23946d60e81bda66273a726fa3686a2ce4f5c1b078d1a6711037a169791a740: Status 404 returned error can't find the container with id d23946d60e81bda66273a726fa3686a2ce4f5c1b078d1a6711037a169791a740
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.673664 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk" event={"ID":"3b4587e1-c827-4a87-961c-bbeddee03974","Type":"ContainerStarted","Data":"ebfbd8f84ee07150dc1c49f958d0da93f2c25e2bbe756f9b563f64a2e224a447"}
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.674108 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk" event={"ID":"3b4587e1-c827-4a87-961c-bbeddee03974","Type":"ContainerStarted","Data":"d23946d60e81bda66273a726fa3686a2ce4f5c1b078d1a6711037a169791a740"}
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.694626 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk" podStartSLOduration=1.694602181 podStartE2EDuration="1.694602181s" podCreationTimestamp="2026-01-10 07:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:00:01.688345372 +0000 UTC m=+830.303838285" watchObservedRunningTime="2026-01-10 07:00:01.694602181 +0000 UTC m=+830.310095084"
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.713439 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"]
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.714088 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.715736 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-f8lp9"
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.715794 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert"
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.717104 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.727064 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"]
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.863060 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-apiservice-cert\") pod \"mariadb-operator-controller-manager-6946567f8-tf7nz\" (UID: \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\") " pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.863115 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pdjc\" (UniqueName: \"kubernetes.io/projected/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-kube-api-access-9pdjc\") pod \"mariadb-operator-controller-manager-6946567f8-tf7nz\" (UID: \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\") " pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.863318 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-webhook-cert\") pod \"mariadb-operator-controller-manager-6946567f8-tf7nz\" (UID: \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\") " pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.965298 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-webhook-cert\") pod \"mariadb-operator-controller-manager-6946567f8-tf7nz\" (UID: \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\") " pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.965886 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-apiservice-cert\") pod \"mariadb-operator-controller-manager-6946567f8-tf7nz\" (UID: \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\") " pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.965953 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pdjc\" (UniqueName: \"kubernetes.io/projected/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-kube-api-access-9pdjc\") pod \"mariadb-operator-controller-manager-6946567f8-tf7nz\" (UID: \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\") " pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.971601 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-webhook-cert\") pod \"mariadb-operator-controller-manager-6946567f8-tf7nz\" (UID: \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\") " pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.972540 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-apiservice-cert\") pod \"mariadb-operator-controller-manager-6946567f8-tf7nz\" (UID: \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\") " pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"
Jan 10 07:00:01 crc kubenswrapper[4810]: I0110 07:00:01.990259 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pdjc\" (UniqueName: \"kubernetes.io/projected/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-kube-api-access-9pdjc\") pod \"mariadb-operator-controller-manager-6946567f8-tf7nz\" (UID: \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\") " pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"
Jan 10 07:00:02 crc kubenswrapper[4810]: I0110 07:00:02.068706 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"
Jan 10 07:00:02 crc kubenswrapper[4810]: I0110 07:00:02.261809 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"]
Jan 10 07:00:02 crc kubenswrapper[4810]: W0110 07:00:02.274411 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2cb1aea_83f9_4f30_8b56_f6726a9c8f81.slice/crio-04b276871a0262ffa8fba26ce2df85ca17c709553ab1141dfb0e8281eacbe32a WatchSource:0}: Error finding container 04b276871a0262ffa8fba26ce2df85ca17c709553ab1141dfb0e8281eacbe32a: Status 404 returned error can't find the container with id 04b276871a0262ffa8fba26ce2df85ca17c709553ab1141dfb0e8281eacbe32a
Jan 10 07:00:02 crc kubenswrapper[4810]: I0110 07:00:02.687867 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz" event={"ID":"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81","Type":"ContainerStarted","Data":"04b276871a0262ffa8fba26ce2df85ca17c709553ab1141dfb0e8281eacbe32a"}
Jan 10 07:00:02 crc kubenswrapper[4810]: I0110 07:00:02.690122 4810 generic.go:334] "Generic (PLEG): container finished" podID="3b4587e1-c827-4a87-961c-bbeddee03974" containerID="ebfbd8f84ee07150dc1c49f958d0da93f2c25e2bbe756f9b563f64a2e224a447" exitCode=0
Jan 10 07:00:02 crc kubenswrapper[4810]: I0110 07:00:02.690270 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk" event={"ID":"3b4587e1-c827-4a87-961c-bbeddee03974","Type":"ContainerDied","Data":"ebfbd8f84ee07150dc1c49f958d0da93f2c25e2bbe756f9b563f64a2e224a447"}
Jan 10 07:00:03 crc kubenswrapper[4810]: I0110 07:00:03.915928 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"
Jan 10 07:00:04 crc kubenswrapper[4810]: I0110 07:00:04.096627 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4587e1-c827-4a87-961c-bbeddee03974-config-volume\") pod \"3b4587e1-c827-4a87-961c-bbeddee03974\" (UID: \"3b4587e1-c827-4a87-961c-bbeddee03974\") "
Jan 10 07:00:04 crc kubenswrapper[4810]: I0110 07:00:04.096686 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnr78\" (UniqueName: \"kubernetes.io/projected/3b4587e1-c827-4a87-961c-bbeddee03974-kube-api-access-nnr78\") pod \"3b4587e1-c827-4a87-961c-bbeddee03974\" (UID: \"3b4587e1-c827-4a87-961c-bbeddee03974\") "
Jan 10 07:00:04 crc kubenswrapper[4810]: I0110 07:00:04.096740 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4587e1-c827-4a87-961c-bbeddee03974-secret-volume\") pod \"3b4587e1-c827-4a87-961c-bbeddee03974\" (UID: \"3b4587e1-c827-4a87-961c-bbeddee03974\") "
Jan 10 07:00:04 crc kubenswrapper[4810]: I0110 07:00:04.098010 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b4587e1-c827-4a87-961c-bbeddee03974-config-volume" (OuterVolumeSpecName: "config-volume") pod "3b4587e1-c827-4a87-961c-bbeddee03974" (UID: "3b4587e1-c827-4a87-961c-bbeddee03974"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:00:04 crc kubenswrapper[4810]: I0110 07:00:04.973969 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4587e1-c827-4a87-961c-bbeddee03974-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3b4587e1-c827-4a87-961c-bbeddee03974" (UID: "3b4587e1-c827-4a87-961c-bbeddee03974"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:00:04 crc kubenswrapper[4810]: I0110 07:00:04.974511 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4587e1-c827-4a87-961c-bbeddee03974-kube-api-access-nnr78" (OuterVolumeSpecName: "kube-api-access-nnr78") pod "3b4587e1-c827-4a87-961c-bbeddee03974" (UID: "3b4587e1-c827-4a87-961c-bbeddee03974"). InnerVolumeSpecName "kube-api-access-nnr78". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:00:04 crc kubenswrapper[4810]: I0110 07:00:04.975593 4810 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4587e1-c827-4a87-961c-bbeddee03974-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 10 07:00:04 crc kubenswrapper[4810]: I0110 07:00:04.975631 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4587e1-c827-4a87-961c-bbeddee03974-config-volume\") on node \"crc\" DevicePath \"\""
Jan 10 07:00:04 crc kubenswrapper[4810]: I0110 07:00:04.975828 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnr78\" (UniqueName: \"kubernetes.io/projected/3b4587e1-c827-4a87-961c-bbeddee03974-kube-api-access-nnr78\") on node \"crc\" DevicePath \"\""
Jan 10 07:00:04 crc kubenswrapper[4810]: I0110 07:00:04.991640 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk" event={"ID":"3b4587e1-c827-4a87-961c-bbeddee03974","Type":"ContainerDied","Data":"d23946d60e81bda66273a726fa3686a2ce4f5c1b078d1a6711037a169791a740"}
Jan 10 07:00:04 crc kubenswrapper[4810]: I0110 07:00:04.991901 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d23946d60e81bda66273a726fa3686a2ce4f5c1b078d1a6711037a169791a740"
Jan 10 07:00:04 crc kubenswrapper[4810]: I0110 07:00:04.991948 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29467140-4htfk"
Jan 10 07:00:12 crc kubenswrapper[4810]: I0110 07:00:12.049328 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz" event={"ID":"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81","Type":"ContainerStarted","Data":"a1c0bc8361d3b8247aa4cf1ad2df8ec6b6ffca072bed0d159642580cc3f78648"}
Jan 10 07:00:12 crc kubenswrapper[4810]: I0110 07:00:12.049873 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"
Jan 10 07:00:12 crc kubenswrapper[4810]: I0110 07:00:12.063871 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz" podStartSLOduration=1.99295457 podStartE2EDuration="11.063853748s" podCreationTimestamp="2026-01-10 07:00:01 +0000 UTC" firstStartedPulling="2026-01-10 07:00:02.279133413 +0000 UTC m=+830.894626286" lastFinishedPulling="2026-01-10 07:00:11.350032561 +0000 UTC m=+839.965525464" observedRunningTime="2026-01-10 07:00:12.062951587 +0000 UTC m=+840.678444490" watchObservedRunningTime="2026-01-10 07:00:12.063853748 +0000 UTC m=+840.679346631"
Jan 10 07:00:20 crc kubenswrapper[4810]: I0110 07:00:20.883461 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 07:00:20 crc kubenswrapper[4810]: I0110 07:00:20.884131 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 07:00:20 crc kubenswrapper[4810]: I0110 07:00:20.884237 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp"
Jan 10 07:00:20 crc kubenswrapper[4810]: I0110 07:00:20.885329 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15f41bc02b9f771423fbf604f7e443b9696eeab6c3ab3a52722c60140661e2b7"} pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 10 07:00:20 crc kubenswrapper[4810]: I0110 07:00:20.885439 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" containerID="cri-o://15f41bc02b9f771423fbf604f7e443b9696eeab6c3ab3a52722c60140661e2b7" gracePeriod=600
Jan 10 07:00:22 crc kubenswrapper[4810]: I0110 07:00:22.074930 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"
Jan 10 07:00:24 crc kubenswrapper[4810]: I0110 07:00:24.133181 4810 generic.go:334] "Generic (PLEG): container finished" podID="b5b79429-9259-412f-bab8-27865ab7029b" containerID="15f41bc02b9f771423fbf604f7e443b9696eeab6c3ab3a52722c60140661e2b7" exitCode=0
Jan 10 07:00:24 crc kubenswrapper[4810]: I0110 07:00:24.133257 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerDied","Data":"15f41bc02b9f771423fbf604f7e443b9696eeab6c3ab3a52722c60140661e2b7"}
Jan 10 07:00:24 crc kubenswrapper[4810]: I0110 07:00:24.133666 4810 scope.go:117] "RemoveContainer" containerID="df6bb1628e4de8d40f9cf46b93e9cff42db4028c710f0012c0d053532cc05941"
Jan 10 07:00:24 crc kubenswrapper[4810]: I0110 07:00:24.725539 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-q72vt"]
Jan 10 07:00:24 crc kubenswrapper[4810]: E0110 07:00:24.726019 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4587e1-c827-4a87-961c-bbeddee03974" containerName="collect-profiles"
Jan 10 07:00:24 crc kubenswrapper[4810]: I0110 07:00:24.726032 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4587e1-c827-4a87-961c-bbeddee03974" containerName="collect-profiles"
Jan 10 07:00:24 crc kubenswrapper[4810]: I0110 07:00:24.726150 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4587e1-c827-4a87-961c-bbeddee03974" containerName="collect-profiles"
Jan 10 07:00:24 crc kubenswrapper[4810]: I0110 07:00:24.726630 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-q72vt"
Jan 10 07:00:24 crc kubenswrapper[4810]: I0110 07:00:24.729367 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-t9bfs"
Jan 10 07:00:24 crc kubenswrapper[4810]: I0110 07:00:24.750318 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-q72vt"]
Jan 10 07:00:24 crc kubenswrapper[4810]: I0110 07:00:24.855277 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j9sz\" (UniqueName: \"kubernetes.io/projected/042c76ca-e9b9-4545-b0a2-23ade5e07e51-kube-api-access-5j9sz\") pod \"infra-operator-index-q72vt\" (UID: \"042c76ca-e9b9-4545-b0a2-23ade5e07e51\") " pod="openstack-operators/infra-operator-index-q72vt"
Jan 10 07:00:24 crc kubenswrapper[4810]: I0110 07:00:24.956503 4810 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"kube-api-access-5j9sz\" (UniqueName: \"kubernetes.io/projected/042c76ca-e9b9-4545-b0a2-23ade5e07e51-kube-api-access-5j9sz\") pod \"infra-operator-index-q72vt\" (UID: \"042c76ca-e9b9-4545-b0a2-23ade5e07e51\") " pod="openstack-operators/infra-operator-index-q72vt" Jan 10 07:00:24 crc kubenswrapper[4810]: I0110 07:00:24.974249 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j9sz\" (UniqueName: \"kubernetes.io/projected/042c76ca-e9b9-4545-b0a2-23ade5e07e51-kube-api-access-5j9sz\") pod \"infra-operator-index-q72vt\" (UID: \"042c76ca-e9b9-4545-b0a2-23ade5e07e51\") " pod="openstack-operators/infra-operator-index-q72vt" Jan 10 07:00:25 crc kubenswrapper[4810]: I0110 07:00:25.046233 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-q72vt" Jan 10 07:00:25 crc kubenswrapper[4810]: I0110 07:00:25.521514 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-q72vt"] Jan 10 07:00:26 crc kubenswrapper[4810]: I0110 07:00:26.152977 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-q72vt" event={"ID":"042c76ca-e9b9-4545-b0a2-23ade5e07e51","Type":"ContainerStarted","Data":"7b8a5ef5a5601e1cb389db1b5a0bbcad0f66027366e301175de6980108b1c146"} Jan 10 07:00:27 crc kubenswrapper[4810]: I0110 07:00:27.164430 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerStarted","Data":"20d4ffbd9b363df7f1755c8df4cc04082344a1639cdb27e352856dec031d294d"} Jan 10 07:00:27 crc kubenswrapper[4810]: I0110 07:00:27.702310 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-q72vt"] Jan 10 07:00:28 crc kubenswrapper[4810]: I0110 07:00:28.265167 4810 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/infra-operator-index-r22zc"] Jan 10 07:00:28 crc kubenswrapper[4810]: I0110 07:00:28.267038 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-r22zc" Jan 10 07:00:28 crc kubenswrapper[4810]: I0110 07:00:28.277949 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-r22zc"] Jan 10 07:00:28 crc kubenswrapper[4810]: I0110 07:00:28.412142 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg87j\" (UniqueName: \"kubernetes.io/projected/f543013c-a720-406d-be72-86f2fd11d8a7-kube-api-access-pg87j\") pod \"infra-operator-index-r22zc\" (UID: \"f543013c-a720-406d-be72-86f2fd11d8a7\") " pod="openstack-operators/infra-operator-index-r22zc" Jan 10 07:00:28 crc kubenswrapper[4810]: I0110 07:00:28.513677 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg87j\" (UniqueName: \"kubernetes.io/projected/f543013c-a720-406d-be72-86f2fd11d8a7-kube-api-access-pg87j\") pod \"infra-operator-index-r22zc\" (UID: \"f543013c-a720-406d-be72-86f2fd11d8a7\") " pod="openstack-operators/infra-operator-index-r22zc" Jan 10 07:00:28 crc kubenswrapper[4810]: I0110 07:00:28.537277 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg87j\" (UniqueName: \"kubernetes.io/projected/f543013c-a720-406d-be72-86f2fd11d8a7-kube-api-access-pg87j\") pod \"infra-operator-index-r22zc\" (UID: \"f543013c-a720-406d-be72-86f2fd11d8a7\") " pod="openstack-operators/infra-operator-index-r22zc" Jan 10 07:00:28 crc kubenswrapper[4810]: I0110 07:00:28.604314 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-r22zc" Jan 10 07:00:29 crc kubenswrapper[4810]: I0110 07:00:29.046952 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-r22zc"] Jan 10 07:00:29 crc kubenswrapper[4810]: I0110 07:00:29.174805 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-r22zc" event={"ID":"f543013c-a720-406d-be72-86f2fd11d8a7","Type":"ContainerStarted","Data":"7f25147094ceaa7de8df9089d52796255a589c5e1b0bc28253cc1f69a0119041"} Jan 10 07:00:29 crc kubenswrapper[4810]: I0110 07:00:29.176369 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-q72vt" event={"ID":"042c76ca-e9b9-4545-b0a2-23ade5e07e51","Type":"ContainerStarted","Data":"c39b8890633bda9b420583b6ab64373ad4a3023eae16eac43098f8ef3cf53406"} Jan 10 07:00:29 crc kubenswrapper[4810]: I0110 07:00:29.176480 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-q72vt" podUID="042c76ca-e9b9-4545-b0a2-23ade5e07e51" containerName="registry-server" containerID="cri-o://c39b8890633bda9b420583b6ab64373ad4a3023eae16eac43098f8ef3cf53406" gracePeriod=2 Jan 10 07:00:29 crc kubenswrapper[4810]: I0110 07:00:29.193368 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-q72vt" podStartSLOduration=2.766156346 podStartE2EDuration="5.193352434s" podCreationTimestamp="2026-01-10 07:00:24 +0000 UTC" firstStartedPulling="2026-01-10 07:00:25.538027237 +0000 UTC m=+854.153520120" lastFinishedPulling="2026-01-10 07:00:27.965223315 +0000 UTC m=+856.580716208" observedRunningTime="2026-01-10 07:00:29.190708552 +0000 UTC m=+857.806201445" watchObservedRunningTime="2026-01-10 07:00:29.193352434 +0000 UTC m=+857.808845317" Jan 10 07:00:29 crc kubenswrapper[4810]: I0110 07:00:29.545872 4810 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-q72vt" Jan 10 07:00:29 crc kubenswrapper[4810]: I0110 07:00:29.731945 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j9sz\" (UniqueName: \"kubernetes.io/projected/042c76ca-e9b9-4545-b0a2-23ade5e07e51-kube-api-access-5j9sz\") pod \"042c76ca-e9b9-4545-b0a2-23ade5e07e51\" (UID: \"042c76ca-e9b9-4545-b0a2-23ade5e07e51\") " Jan 10 07:00:29 crc kubenswrapper[4810]: I0110 07:00:29.739917 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042c76ca-e9b9-4545-b0a2-23ade5e07e51-kube-api-access-5j9sz" (OuterVolumeSpecName: "kube-api-access-5j9sz") pod "042c76ca-e9b9-4545-b0a2-23ade5e07e51" (UID: "042c76ca-e9b9-4545-b0a2-23ade5e07e51"). InnerVolumeSpecName "kube-api-access-5j9sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:00:29 crc kubenswrapper[4810]: I0110 07:00:29.833640 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j9sz\" (UniqueName: \"kubernetes.io/projected/042c76ca-e9b9-4545-b0a2-23ade5e07e51-kube-api-access-5j9sz\") on node \"crc\" DevicePath \"\"" Jan 10 07:00:30 crc kubenswrapper[4810]: I0110 07:00:30.183858 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-r22zc" event={"ID":"f543013c-a720-406d-be72-86f2fd11d8a7","Type":"ContainerStarted","Data":"67f0d1bc649bc133c499155a64f1267f400b0c4daf68415ce40454a0d47de2a7"} Jan 10 07:00:30 crc kubenswrapper[4810]: I0110 07:00:30.186248 4810 generic.go:334] "Generic (PLEG): container finished" podID="042c76ca-e9b9-4545-b0a2-23ade5e07e51" containerID="c39b8890633bda9b420583b6ab64373ad4a3023eae16eac43098f8ef3cf53406" exitCode=0 Jan 10 07:00:30 crc kubenswrapper[4810]: I0110 07:00:30.186290 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-q72vt" 
event={"ID":"042c76ca-e9b9-4545-b0a2-23ade5e07e51","Type":"ContainerDied","Data":"c39b8890633bda9b420583b6ab64373ad4a3023eae16eac43098f8ef3cf53406"} Jan 10 07:00:30 crc kubenswrapper[4810]: I0110 07:00:30.186316 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-q72vt" event={"ID":"042c76ca-e9b9-4545-b0a2-23ade5e07e51","Type":"ContainerDied","Data":"7b8a5ef5a5601e1cb389db1b5a0bbcad0f66027366e301175de6980108b1c146"} Jan 10 07:00:30 crc kubenswrapper[4810]: I0110 07:00:30.186335 4810 scope.go:117] "RemoveContainer" containerID="c39b8890633bda9b420583b6ab64373ad4a3023eae16eac43098f8ef3cf53406" Jan 10 07:00:30 crc kubenswrapper[4810]: I0110 07:00:30.186371 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-q72vt" Jan 10 07:00:30 crc kubenswrapper[4810]: I0110 07:00:30.200580 4810 scope.go:117] "RemoveContainer" containerID="c39b8890633bda9b420583b6ab64373ad4a3023eae16eac43098f8ef3cf53406" Jan 10 07:00:30 crc kubenswrapper[4810]: E0110 07:00:30.201473 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c39b8890633bda9b420583b6ab64373ad4a3023eae16eac43098f8ef3cf53406\": container with ID starting with c39b8890633bda9b420583b6ab64373ad4a3023eae16eac43098f8ef3cf53406 not found: ID does not exist" containerID="c39b8890633bda9b420583b6ab64373ad4a3023eae16eac43098f8ef3cf53406" Jan 10 07:00:30 crc kubenswrapper[4810]: I0110 07:00:30.201520 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c39b8890633bda9b420583b6ab64373ad4a3023eae16eac43098f8ef3cf53406"} err="failed to get container status \"c39b8890633bda9b420583b6ab64373ad4a3023eae16eac43098f8ef3cf53406\": rpc error: code = NotFound desc = could not find container \"c39b8890633bda9b420583b6ab64373ad4a3023eae16eac43098f8ef3cf53406\": container with ID starting with 
c39b8890633bda9b420583b6ab64373ad4a3023eae16eac43098f8ef3cf53406 not found: ID does not exist" Jan 10 07:00:30 crc kubenswrapper[4810]: I0110 07:00:30.204469 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-r22zc" podStartSLOduration=1.699099798 podStartE2EDuration="2.204425529s" podCreationTimestamp="2026-01-10 07:00:28 +0000 UTC" firstStartedPulling="2026-01-10 07:00:29.06475124 +0000 UTC m=+857.680244143" lastFinishedPulling="2026-01-10 07:00:29.570076991 +0000 UTC m=+858.185569874" observedRunningTime="2026-01-10 07:00:30.200933425 +0000 UTC m=+858.816426358" watchObservedRunningTime="2026-01-10 07:00:30.204425529 +0000 UTC m=+858.819918422" Jan 10 07:00:30 crc kubenswrapper[4810]: I0110 07:00:30.222875 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-q72vt"] Jan 10 07:00:30 crc kubenswrapper[4810]: I0110 07:00:30.229795 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-q72vt"] Jan 10 07:00:31 crc kubenswrapper[4810]: I0110 07:00:31.710945 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="042c76ca-e9b9-4545-b0a2-23ade5e07e51" path="/var/lib/kubelet/pods/042c76ca-e9b9-4545-b0a2-23ade5e07e51/volumes" Jan 10 07:00:38 crc kubenswrapper[4810]: I0110 07:00:38.605433 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-r22zc" Jan 10 07:00:38 crc kubenswrapper[4810]: I0110 07:00:38.606101 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-r22zc" Jan 10 07:00:38 crc kubenswrapper[4810]: I0110 07:00:38.648664 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-r22zc" Jan 10 07:00:39 crc kubenswrapper[4810]: I0110 07:00:39.302577 4810 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-r22zc" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.118487 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8"] Jan 10 07:00:45 crc kubenswrapper[4810]: E0110 07:00:45.119369 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042c76ca-e9b9-4545-b0a2-23ade5e07e51" containerName="registry-server" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.119392 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="042c76ca-e9b9-4545-b0a2-23ade5e07e51" containerName="registry-server" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.119575 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="042c76ca-e9b9-4545-b0a2-23ade5e07e51" containerName="registry-server" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.120958 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.127295 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-8bf9r" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.135243 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8"] Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.248218 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a08fc417-fb93-4e50-a805-d13909763e26-util\") pod \"1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8\" (UID: \"a08fc417-fb93-4e50-a805-d13909763e26\") " pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.248416 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a08fc417-fb93-4e50-a805-d13909763e26-bundle\") pod \"1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8\" (UID: \"a08fc417-fb93-4e50-a805-d13909763e26\") " pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.248503 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n5b2\" (UniqueName: \"kubernetes.io/projected/a08fc417-fb93-4e50-a805-d13909763e26-kube-api-access-2n5b2\") pod \"1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8\" (UID: \"a08fc417-fb93-4e50-a805-d13909763e26\") " pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 
07:00:45.349851 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a08fc417-fb93-4e50-a805-d13909763e26-bundle\") pod \"1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8\" (UID: \"a08fc417-fb93-4e50-a805-d13909763e26\") " pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.349931 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n5b2\" (UniqueName: \"kubernetes.io/projected/a08fc417-fb93-4e50-a805-d13909763e26-kube-api-access-2n5b2\") pod \"1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8\" (UID: \"a08fc417-fb93-4e50-a805-d13909763e26\") " pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.350038 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a08fc417-fb93-4e50-a805-d13909763e26-util\") pod \"1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8\" (UID: \"a08fc417-fb93-4e50-a805-d13909763e26\") " pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.350803 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a08fc417-fb93-4e50-a805-d13909763e26-bundle\") pod \"1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8\" (UID: \"a08fc417-fb93-4e50-a805-d13909763e26\") " pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.350837 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a08fc417-fb93-4e50-a805-d13909763e26-util\") pod \"1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8\" (UID: \"a08fc417-fb93-4e50-a805-d13909763e26\") " pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.390028 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n5b2\" (UniqueName: \"kubernetes.io/projected/a08fc417-fb93-4e50-a805-d13909763e26-kube-api-access-2n5b2\") pod \"1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8\" (UID: \"a08fc417-fb93-4e50-a805-d13909763e26\") " pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.438239 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" Jan 10 07:00:45 crc kubenswrapper[4810]: I0110 07:00:45.734453 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8"] Jan 10 07:00:46 crc kubenswrapper[4810]: I0110 07:00:46.318448 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" event={"ID":"a08fc417-fb93-4e50-a805-d13909763e26","Type":"ContainerStarted","Data":"4d9b3873a8423d6bdb65cf6bfbd23c033a06fdfe7b5f7477ea2519ec9e6935bf"} Jan 10 07:00:47 crc kubenswrapper[4810]: I0110 07:00:47.330058 4810 generic.go:334] "Generic (PLEG): container finished" podID="a08fc417-fb93-4e50-a805-d13909763e26" containerID="9604d779e2b0ad8c4a1a3a5a75a2c67b5feb3e69dd43eb119786582bced25621" exitCode=0 Jan 10 07:00:47 crc kubenswrapper[4810]: I0110 07:00:47.330632 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" event={"ID":"a08fc417-fb93-4e50-a805-d13909763e26","Type":"ContainerDied","Data":"9604d779e2b0ad8c4a1a3a5a75a2c67b5feb3e69dd43eb119786582bced25621"} Jan 10 07:00:50 crc kubenswrapper[4810]: I0110 07:00:50.353724 4810 generic.go:334] "Generic (PLEG): container finished" podID="a08fc417-fb93-4e50-a805-d13909763e26" containerID="d8775154c2e29f0bc5e96edfcc5fb97b8bc79e716cccda84bb18de1f52da1a61" exitCode=0 Jan 10 07:00:50 crc kubenswrapper[4810]: I0110 07:00:50.353948 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" event={"ID":"a08fc417-fb93-4e50-a805-d13909763e26","Type":"ContainerDied","Data":"d8775154c2e29f0bc5e96edfcc5fb97b8bc79e716cccda84bb18de1f52da1a61"} Jan 10 07:00:51 crc kubenswrapper[4810]: I0110 07:00:51.375095 4810 generic.go:334] "Generic (PLEG): container finished" podID="a08fc417-fb93-4e50-a805-d13909763e26" containerID="9c747283e409513af3b7c0f43c232aacaefe07e7d75b8d9de038f14bbfc8adf1" exitCode=0 Jan 10 07:00:51 crc kubenswrapper[4810]: I0110 07:00:51.375177 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" event={"ID":"a08fc417-fb93-4e50-a805-d13909763e26","Type":"ContainerDied","Data":"9c747283e409513af3b7c0f43c232aacaefe07e7d75b8d9de038f14bbfc8adf1"} Jan 10 07:00:52 crc kubenswrapper[4810]: I0110 07:00:52.652921 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" Jan 10 07:00:54 crc kubenswrapper[4810]: I0110 07:00:52.762666 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n5b2\" (UniqueName: \"kubernetes.io/projected/a08fc417-fb93-4e50-a805-d13909763e26-kube-api-access-2n5b2\") pod \"a08fc417-fb93-4e50-a805-d13909763e26\" (UID: \"a08fc417-fb93-4e50-a805-d13909763e26\") " Jan 10 07:00:54 crc kubenswrapper[4810]: I0110 07:00:52.762769 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a08fc417-fb93-4e50-a805-d13909763e26-util\") pod \"a08fc417-fb93-4e50-a805-d13909763e26\" (UID: \"a08fc417-fb93-4e50-a805-d13909763e26\") " Jan 10 07:00:54 crc kubenswrapper[4810]: I0110 07:00:52.762836 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a08fc417-fb93-4e50-a805-d13909763e26-bundle\") pod \"a08fc417-fb93-4e50-a805-d13909763e26\" (UID: \"a08fc417-fb93-4e50-a805-d13909763e26\") " Jan 10 07:00:54 crc kubenswrapper[4810]: I0110 07:00:52.765178 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a08fc417-fb93-4e50-a805-d13909763e26-bundle" (OuterVolumeSpecName: "bundle") pod "a08fc417-fb93-4e50-a805-d13909763e26" (UID: "a08fc417-fb93-4e50-a805-d13909763e26"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:00:54 crc kubenswrapper[4810]: I0110 07:00:52.768553 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a08fc417-fb93-4e50-a805-d13909763e26-kube-api-access-2n5b2" (OuterVolumeSpecName: "kube-api-access-2n5b2") pod "a08fc417-fb93-4e50-a805-d13909763e26" (UID: "a08fc417-fb93-4e50-a805-d13909763e26"). InnerVolumeSpecName "kube-api-access-2n5b2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:00:54 crc kubenswrapper[4810]: I0110 07:00:52.777445 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a08fc417-fb93-4e50-a805-d13909763e26-util" (OuterVolumeSpecName: "util") pod "a08fc417-fb93-4e50-a805-d13909763e26" (UID: "a08fc417-fb93-4e50-a805-d13909763e26"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:00:54 crc kubenswrapper[4810]: I0110 07:00:52.864679 4810 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a08fc417-fb93-4e50-a805-d13909763e26-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 07:00:54 crc kubenswrapper[4810]: I0110 07:00:52.864701 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2n5b2\" (UniqueName: \"kubernetes.io/projected/a08fc417-fb93-4e50-a805-d13909763e26-kube-api-access-2n5b2\") on node \"crc\" DevicePath \"\"" Jan 10 07:00:54 crc kubenswrapper[4810]: I0110 07:00:52.864712 4810 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a08fc417-fb93-4e50-a805-d13909763e26-util\") on node \"crc\" DevicePath \"\"" Jan 10 07:00:54 crc kubenswrapper[4810]: I0110 07:00:53.390625 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" event={"ID":"a08fc417-fb93-4e50-a805-d13909763e26","Type":"ContainerDied","Data":"4d9b3873a8423d6bdb65cf6bfbd23c033a06fdfe7b5f7477ea2519ec9e6935bf"} Jan 10 07:00:54 crc kubenswrapper[4810]: I0110 07:00:53.390671 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d9b3873a8423d6bdb65cf6bfbd23c033a06fdfe7b5f7477ea2519ec9e6935bf" Jan 10 07:00:54 crc kubenswrapper[4810]: I0110 07:00:53.390792 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8" Jan 10 07:00:54 crc kubenswrapper[4810]: E0110 07:00:54.731706 4810 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.039s" Jan 10 07:01:02 crc kubenswrapper[4810]: I0110 07:01:02.910723 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9"] Jan 10 07:01:02 crc kubenswrapper[4810]: E0110 07:01:02.911513 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08fc417-fb93-4e50-a805-d13909763e26" containerName="extract" Jan 10 07:01:02 crc kubenswrapper[4810]: I0110 07:01:02.911529 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08fc417-fb93-4e50-a805-d13909763e26" containerName="extract" Jan 10 07:01:02 crc kubenswrapper[4810]: E0110 07:01:02.911553 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08fc417-fb93-4e50-a805-d13909763e26" containerName="pull" Jan 10 07:01:02 crc kubenswrapper[4810]: I0110 07:01:02.911560 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08fc417-fb93-4e50-a805-d13909763e26" containerName="pull" Jan 10 07:01:02 crc kubenswrapper[4810]: E0110 07:01:02.911581 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a08fc417-fb93-4e50-a805-d13909763e26" containerName="util" Jan 10 07:01:02 crc kubenswrapper[4810]: I0110 07:01:02.911589 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a08fc417-fb93-4e50-a805-d13909763e26" containerName="util" Jan 10 07:01:02 crc kubenswrapper[4810]: I0110 07:01:02.911731 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a08fc417-fb93-4e50-a805-d13909763e26" containerName="extract" Jan 10 07:01:02 crc kubenswrapper[4810]: I0110 07:01:02.912209 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:01:02 crc kubenswrapper[4810]: I0110 07:01:02.915506 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zmkd9" Jan 10 07:01:02 crc kubenswrapper[4810]: I0110 07:01:02.916838 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Jan 10 07:01:02 crc kubenswrapper[4810]: I0110 07:01:02.939671 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9"] Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.021764 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da111c6b-f079-4b6f-8bab-421f512d92f3-apiservice-cert\") pod \"infra-operator-controller-manager-68f4fb9846-lvtr9\" (UID: \"da111c6b-f079-4b6f-8bab-421f512d92f3\") " pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.022091 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lw7j\" (UniqueName: \"kubernetes.io/projected/da111c6b-f079-4b6f-8bab-421f512d92f3-kube-api-access-9lw7j\") pod \"infra-operator-controller-manager-68f4fb9846-lvtr9\" (UID: \"da111c6b-f079-4b6f-8bab-421f512d92f3\") " pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.022129 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da111c6b-f079-4b6f-8bab-421f512d92f3-webhook-cert\") pod \"infra-operator-controller-manager-68f4fb9846-lvtr9\" (UID: 
\"da111c6b-f079-4b6f-8bab-421f512d92f3\") " pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.122816 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da111c6b-f079-4b6f-8bab-421f512d92f3-webhook-cert\") pod \"infra-operator-controller-manager-68f4fb9846-lvtr9\" (UID: \"da111c6b-f079-4b6f-8bab-421f512d92f3\") " pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.122913 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da111c6b-f079-4b6f-8bab-421f512d92f3-apiservice-cert\") pod \"infra-operator-controller-manager-68f4fb9846-lvtr9\" (UID: \"da111c6b-f079-4b6f-8bab-421f512d92f3\") " pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.122980 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lw7j\" (UniqueName: \"kubernetes.io/projected/da111c6b-f079-4b6f-8bab-421f512d92f3-kube-api-access-9lw7j\") pod \"infra-operator-controller-manager-68f4fb9846-lvtr9\" (UID: \"da111c6b-f079-4b6f-8bab-421f512d92f3\") " pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.129084 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da111c6b-f079-4b6f-8bab-421f512d92f3-webhook-cert\") pod \"infra-operator-controller-manager-68f4fb9846-lvtr9\" (UID: \"da111c6b-f079-4b6f-8bab-421f512d92f3\") " pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.129113 4810 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da111c6b-f079-4b6f-8bab-421f512d92f3-apiservice-cert\") pod \"infra-operator-controller-manager-68f4fb9846-lvtr9\" (UID: \"da111c6b-f079-4b6f-8bab-421f512d92f3\") " pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.143622 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.144964 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.147891 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openshift-service-ca.crt" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.148527 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-config-data" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.148812 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"openstack-scripts" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.148958 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"galera-openstack-dockercfg-pgfs7" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.149237 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"kube-root-ca.crt" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.160551 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.161629 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.163952 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lw7j\" (UniqueName: \"kubernetes.io/projected/da111c6b-f079-4b6f-8bab-421f512d92f3-kube-api-access-9lw7j\") pod \"infra-operator-controller-manager-68f4fb9846-lvtr9\" (UID: \"da111c6b-f079-4b6f-8bab-421f512d92f3\") " pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.172273 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.178837 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.180010 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.185089 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.195905 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.224732 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-kolla-config\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.224773 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-kolla-config\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.224792 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk8tl\" (UniqueName: \"kubernetes.io/projected/692848fd-4cf4-401e-819f-c14ac900efea-kube-api-access-nk8tl\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.224815 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cb0387ec-3523-41da-b323-7249eb242b4d-config-data-generated\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.224831 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/692848fd-4cf4-401e-819f-c14ac900efea-config-data-generated\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.224846 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-config-data-default\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.224878 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-operator-scripts\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.224892 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.224914 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.224929 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-config-data-default\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.224944 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.224962 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s286v\" (UniqueName: 
\"kubernetes.io/projected/cb0387ec-3523-41da-b323-7249eb242b4d-kube-api-access-s286v\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.232740 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.325932 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-config-data-generated\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.325987 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5jk2\" (UniqueName: \"kubernetes.io/projected/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-kube-api-access-r5jk2\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.326027 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-operator-scripts\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.326119 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " 
pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.326170 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.326260 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.326294 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-config-data-default\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.326324 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.326367 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s286v\" (UniqueName: \"kubernetes.io/projected/cb0387ec-3523-41da-b323-7249eb242b4d-kube-api-access-s286v\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.326475 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-operator-scripts\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.326497 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-kolla-config\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.326558 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-kolla-config\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.326621 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk8tl\" (UniqueName: \"kubernetes.io/projected/692848fd-4cf4-401e-819f-c14ac900efea-kube-api-access-nk8tl\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.326980 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cb0387ec-3523-41da-b323-7249eb242b4d-config-data-generated\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.327003 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/692848fd-4cf4-401e-819f-c14ac900efea-config-data-generated\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.327018 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-config-data-default\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.327040 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-kolla-config\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.327122 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-config-data-default\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.327646 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") device mount path \"/mnt/openstack/pv07\"" pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.328049 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") device mount path \"/mnt/openstack/pv09\"" pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.328049 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-operator-scripts\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.328862 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-config-data-default\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.329063 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cb0387ec-3523-41da-b323-7249eb242b4d-config-data-generated\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.329110 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-kolla-config\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.329211 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/692848fd-4cf4-401e-819f-c14ac900efea-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.329685 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-kolla-config\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.329757 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-config-data-default\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.331833 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-operator-scripts\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.355483 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s286v\" (UniqueName: \"kubernetes.io/projected/cb0387ec-3523-41da-b323-7249eb242b4d-kube-api-access-s286v\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.355629 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc 
kubenswrapper[4810]: I0110 07:01:03.372156 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk8tl\" (UniqueName: \"kubernetes.io/projected/692848fd-4cf4-401e-819f-c14ac900efea-kube-api-access-nk8tl\") pod \"openstack-galera-0\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") " pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.375128 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-2\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.428220 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.428373 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-operator-scripts\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.428455 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-kolla-config\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.428476 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-config-data-default\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.428501 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-config-data-generated\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.428553 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5jk2\" (UniqueName: \"kubernetes.io/projected/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-kube-api-access-r5jk2\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.428857 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") device mount path \"/mnt/openstack/pv11\"" pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.430800 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-config-data-generated\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.431052 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-kolla-config\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.431765 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-config-data-default\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.433227 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-operator-scripts\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.451576 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.457119 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5jk2\" (UniqueName: \"kubernetes.io/projected/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-kube-api-access-r5jk2\") pod \"openstack-galera-1\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.526796 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.543837 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.591551 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.688710 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9"] Jan 10 07:01:03 crc kubenswrapper[4810]: I0110 07:01:03.817763 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 10 07:01:03 crc kubenswrapper[4810]: W0110 07:01:03.829408 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod692848fd_4cf4_401e_819f_c14ac900efea.slice/crio-68ec0cfd3704172a795730f303e277df9aa2eff64f2a4d8120960d24b6aca170 WatchSource:0}: Error finding container 68ec0cfd3704172a795730f303e277df9aa2eff64f2a4d8120960d24b6aca170: Status 404 returned error can't find the container with id 68ec0cfd3704172a795730f303e277df9aa2eff64f2a4d8120960d24b6aca170 Jan 10 07:01:04 crc kubenswrapper[4810]: I0110 07:01:04.071940 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 10 07:01:04 crc kubenswrapper[4810]: I0110 07:01:04.164870 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 10 07:01:04 crc kubenswrapper[4810]: W0110 07:01:04.179717 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb0387ec_3523_41da_b323_7249eb242b4d.slice/crio-d6a4940484c8a0697ec180dac285bb1ecfa19df0e9ea8b0a48d20144678c8c48 WatchSource:0}: Error finding container d6a4940484c8a0697ec180dac285bb1ecfa19df0e9ea8b0a48d20144678c8c48: Status 404 returned error can't find the container with id 
d6a4940484c8a0697ec180dac285bb1ecfa19df0e9ea8b0a48d20144678c8c48 Jan 10 07:01:04 crc kubenswrapper[4810]: I0110 07:01:04.480060 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"06676b15-3e4e-4fa8-bfe6-3d95ad522c31","Type":"ContainerStarted","Data":"4f2cb68a314cac2fa28de279b279a5fe03603771c4f457953e9b214626e040d6"} Jan 10 07:01:04 crc kubenswrapper[4810]: I0110 07:01:04.491659 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" event={"ID":"da111c6b-f079-4b6f-8bab-421f512d92f3","Type":"ContainerStarted","Data":"4140bed8829354887ce3502a5f95228c28362f6b9ada942bfe7fc1dcf4ecf013"} Jan 10 07:01:04 crc kubenswrapper[4810]: I0110 07:01:04.492706 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"692848fd-4cf4-401e-819f-c14ac900efea","Type":"ContainerStarted","Data":"68ec0cfd3704172a795730f303e277df9aa2eff64f2a4d8120960d24b6aca170"} Jan 10 07:01:04 crc kubenswrapper[4810]: I0110 07:01:04.506042 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cb0387ec-3523-41da-b323-7249eb242b4d","Type":"ContainerStarted","Data":"d6a4940484c8a0697ec180dac285bb1ecfa19df0e9ea8b0a48d20144678c8c48"} Jan 10 07:01:07 crc kubenswrapper[4810]: I0110 07:01:07.530676 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" event={"ID":"da111c6b-f079-4b6f-8bab-421f512d92f3","Type":"ContainerStarted","Data":"46e32b9a8d41197302d015b992c644f1497840d286b434fbd9bc63f430ca38f0"} Jan 10 07:01:07 crc kubenswrapper[4810]: I0110 07:01:07.531769 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:01:07 crc kubenswrapper[4810]: I0110 07:01:07.556307 4810 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" podStartSLOduration=2.2943333790000002 podStartE2EDuration="5.556284415s" podCreationTimestamp="2026-01-10 07:01:02 +0000 UTC" firstStartedPulling="2026-01-10 07:01:03.7053583 +0000 UTC m=+892.320851193" lastFinishedPulling="2026-01-10 07:01:06.967309356 +0000 UTC m=+895.582802229" observedRunningTime="2026-01-10 07:01:07.552389523 +0000 UTC m=+896.167882406" watchObservedRunningTime="2026-01-10 07:01:07.556284415 +0000 UTC m=+896.171777298" Jan 10 07:01:13 crc kubenswrapper[4810]: I0110 07:01:13.237145 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:01:16 crc kubenswrapper[4810]: I0110 07:01:16.486412 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-9nk95"] Jan 10 07:01:16 crc kubenswrapper[4810]: I0110 07:01:16.487867 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-9nk95" Jan 10 07:01:16 crc kubenswrapper[4810]: I0110 07:01:16.489945 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-l6vlm" Jan 10 07:01:16 crc kubenswrapper[4810]: I0110 07:01:16.499775 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-9nk95"] Jan 10 07:01:16 crc kubenswrapper[4810]: I0110 07:01:16.535837 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvv2b\" (UniqueName: \"kubernetes.io/projected/894d6f2a-7201-41a8-b8b8-0dfa343da98a-kube-api-access-cvv2b\") pod \"rabbitmq-cluster-operator-index-9nk95\" (UID: \"894d6f2a-7201-41a8-b8b8-0dfa343da98a\") " pod="openstack-operators/rabbitmq-cluster-operator-index-9nk95" Jan 10 07:01:16 crc kubenswrapper[4810]: I0110 07:01:16.637436 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvv2b\" (UniqueName: \"kubernetes.io/projected/894d6f2a-7201-41a8-b8b8-0dfa343da98a-kube-api-access-cvv2b\") pod \"rabbitmq-cluster-operator-index-9nk95\" (UID: \"894d6f2a-7201-41a8-b8b8-0dfa343da98a\") " pod="openstack-operators/rabbitmq-cluster-operator-index-9nk95" Jan 10 07:01:16 crc kubenswrapper[4810]: I0110 07:01:16.662522 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvv2b\" (UniqueName: \"kubernetes.io/projected/894d6f2a-7201-41a8-b8b8-0dfa343da98a-kube-api-access-cvv2b\") pod \"rabbitmq-cluster-operator-index-9nk95\" (UID: \"894d6f2a-7201-41a8-b8b8-0dfa343da98a\") " pod="openstack-operators/rabbitmq-cluster-operator-index-9nk95" Jan 10 07:01:16 crc kubenswrapper[4810]: I0110 07:01:16.823018 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-9nk95" Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.675937 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-9nk95"] Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.852320 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.853209 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.855079 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"memcached-config-data" Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.875445 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.880588 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"memcached-memcached-dockercfg-hgwbh" Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.889591 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6drj\" (UniqueName: \"kubernetes.io/projected/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-kube-api-access-z6drj\") pod \"memcached-0\" (UID: \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\") " pod="swift-kuttl-tests/memcached-0" Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.889643 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-kolla-config\") pod \"memcached-0\" (UID: \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\") " pod="swift-kuttl-tests/memcached-0" Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.889671 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-config-data\") pod \"memcached-0\" (UID: \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\") " pod="swift-kuttl-tests/memcached-0" Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.990214 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6drj\" (UniqueName: \"kubernetes.io/projected/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-kube-api-access-z6drj\") pod \"memcached-0\" (UID: \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\") " pod="swift-kuttl-tests/memcached-0" Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.990256 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-kolla-config\") pod \"memcached-0\" (UID: \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\") " pod="swift-kuttl-tests/memcached-0" Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.990283 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-config-data\") pod \"memcached-0\" (UID: \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\") " pod="swift-kuttl-tests/memcached-0" Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.991046 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-config-data\") pod \"memcached-0\" (UID: \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\") " pod="swift-kuttl-tests/memcached-0" Jan 10 07:01:20 crc kubenswrapper[4810]: I0110 07:01:20.991177 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-kolla-config\") pod \"memcached-0\" (UID: 
\"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\") " pod="swift-kuttl-tests/memcached-0" Jan 10 07:01:21 crc kubenswrapper[4810]: I0110 07:01:21.008119 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6drj\" (UniqueName: \"kubernetes.io/projected/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-kube-api-access-z6drj\") pod \"memcached-0\" (UID: \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\") " pod="swift-kuttl-tests/memcached-0" Jan 10 07:01:21 crc kubenswrapper[4810]: I0110 07:01:21.170715 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/memcached-0" Jan 10 07:01:21 crc kubenswrapper[4810]: I0110 07:01:21.278559 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-4ph29"] Jan 10 07:01:21 crc kubenswrapper[4810]: I0110 07:01:21.279254 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" Jan 10 07:01:21 crc kubenswrapper[4810]: I0110 07:01:21.286948 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-4ph29"] Jan 10 07:01:21 crc kubenswrapper[4810]: I0110 07:01:21.394719 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmrzh\" (UniqueName: \"kubernetes.io/projected/c0ce8179-abd0-4cbe-8bf9-9aea18680a9c-kube-api-access-nmrzh\") pod \"rabbitmq-cluster-operator-index-4ph29\" (UID: \"c0ce8179-abd0-4cbe-8bf9-9aea18680a9c\") " pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" Jan 10 07:01:21 crc kubenswrapper[4810]: I0110 07:01:21.495835 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmrzh\" (UniqueName: \"kubernetes.io/projected/c0ce8179-abd0-4cbe-8bf9-9aea18680a9c-kube-api-access-nmrzh\") pod \"rabbitmq-cluster-operator-index-4ph29\" (UID: 
\"c0ce8179-abd0-4cbe-8bf9-9aea18680a9c\") " pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" Jan 10 07:01:21 crc kubenswrapper[4810]: I0110 07:01:21.514295 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmrzh\" (UniqueName: \"kubernetes.io/projected/c0ce8179-abd0-4cbe-8bf9-9aea18680a9c-kube-api-access-nmrzh\") pod \"rabbitmq-cluster-operator-index-4ph29\" (UID: \"c0ce8179-abd0-4cbe-8bf9-9aea18680a9c\") " pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" Jan 10 07:01:21 crc kubenswrapper[4810]: I0110 07:01:21.591723 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" Jan 10 07:01:23 crc kubenswrapper[4810]: I0110 07:01:23.237319 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-9nk95"] Jan 10 07:01:23 crc kubenswrapper[4810]: I0110 07:01:23.489447 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 10 07:01:23 crc kubenswrapper[4810]: W0110 07:01:23.489657 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b41eafc_06e2_4d4c_8e1c_20220c76be9f.slice/crio-29643a49a8e7ec32a80d03c7b842956f5a293d0c987c47b5317fb79d8ded5c00 WatchSource:0}: Error finding container 29643a49a8e7ec32a80d03c7b842956f5a293d0c987c47b5317fb79d8ded5c00: Status 404 returned error can't find the container with id 29643a49a8e7ec32a80d03c7b842956f5a293d0c987c47b5317fb79d8ded5c00 Jan 10 07:01:23 crc kubenswrapper[4810]: I0110 07:01:23.624781 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-4ph29"] Jan 10 07:01:23 crc kubenswrapper[4810]: W0110 07:01:23.628788 4810 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0ce8179_abd0_4cbe_8bf9_9aea18680a9c.slice/crio-df8ce4ce014f354d5814cd1ee405f66c9d631bcf698dfd4c1b79ebf161fa840b WatchSource:0}: Error finding container df8ce4ce014f354d5814cd1ee405f66c9d631bcf698dfd4c1b79ebf161fa840b: Status 404 returned error can't find the container with id df8ce4ce014f354d5814cd1ee405f66c9d631bcf698dfd4c1b79ebf161fa840b Jan 10 07:01:23 crc kubenswrapper[4810]: I0110 07:01:23.665510 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-9nk95" event={"ID":"894d6f2a-7201-41a8-b8b8-0dfa343da98a","Type":"ContainerStarted","Data":"294def4246f8643e338d267f5389bd472f8079b7070adae65a0f52d5a5db738f"} Jan 10 07:01:23 crc kubenswrapper[4810]: I0110 07:01:23.666798 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" event={"ID":"c0ce8179-abd0-4cbe-8bf9-9aea18680a9c","Type":"ContainerStarted","Data":"df8ce4ce014f354d5814cd1ee405f66c9d631bcf698dfd4c1b79ebf161fa840b"} Jan 10 07:01:23 crc kubenswrapper[4810]: I0110 07:01:23.669111 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"0b41eafc-06e2-4d4c-8e1c-20220c76be9f","Type":"ContainerStarted","Data":"29643a49a8e7ec32a80d03c7b842956f5a293d0c987c47b5317fb79d8ded5c00"} Jan 10 07:01:23 crc kubenswrapper[4810]: E0110 07:01:23.726948 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 10 07:01:23 crc kubenswrapper[4810]: E0110 07:01:23.727124 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5jk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-1_swift-kuttl-tests(06676b15-3e4e-4fa8-bfe6-3d95ad522c31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 10 07:01:23 crc kubenswrapper[4810]: E0110 07:01:23.728448 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="swift-kuttl-tests/openstack-galera-1" podUID="06676b15-3e4e-4fa8-bfe6-3d95ad522c31" Jan 10 07:01:24 crc kubenswrapper[4810]: E0110 07:01:24.254289 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 10 07:01:24 crc kubenswrapper[4810]: E0110 07:01:24.254783 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nk8tl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-galera-0_swift-kuttl-tests(692848fd-4cf4-401e-819f-c14ac900efea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 10 07:01:24 crc kubenswrapper[4810]: E0110 07:01:24.256095 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="swift-kuttl-tests/openstack-galera-0" podUID="692848fd-4cf4-401e-819f-c14ac900efea" Jan 10 07:01:24 crc kubenswrapper[4810]: E0110 07:01:24.492399 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Jan 10 07:01:24 crc kubenswrapper[4810]: E0110 07:01:24.492677 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s286v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-galera-2_swift-kuttl-tests(cb0387ec-3523-41da-b323-7249eb242b4d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 10 07:01:24 crc kubenswrapper[4810]: E0110 07:01:24.494237 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="swift-kuttl-tests/openstack-galera-2" podUID="cb0387ec-3523-41da-b323-7249eb242b4d" Jan 10 07:01:24 crc kubenswrapper[4810]: E0110 07:01:24.680931 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="swift-kuttl-tests/openstack-galera-0" podUID="692848fd-4cf4-401e-819f-c14ac900efea" Jan 10 07:01:24 crc kubenswrapper[4810]: E0110 07:01:24.680994 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="swift-kuttl-tests/openstack-galera-1" podUID="06676b15-3e4e-4fa8-bfe6-3d95ad522c31" Jan 10 07:01:24 crc kubenswrapper[4810]: E0110 07:01:24.681078 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="swift-kuttl-tests/openstack-galera-2" podUID="cb0387ec-3523-41da-b323-7249eb242b4d" Jan 10 07:01:29 crc kubenswrapper[4810]: I0110 07:01:29.726726 4810 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/rabbitmq-cluster-operator-index-9nk95" event={"ID":"894d6f2a-7201-41a8-b8b8-0dfa343da98a","Type":"ContainerStarted","Data":"7c39b0fb6122beb28cb6301cde8e4e93d5400e2e6feef60071682cd30e3d505a"} Jan 10 07:01:29 crc kubenswrapper[4810]: I0110 07:01:29.726855 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-9nk95" podUID="894d6f2a-7201-41a8-b8b8-0dfa343da98a" containerName="registry-server" containerID="cri-o://7c39b0fb6122beb28cb6301cde8e4e93d5400e2e6feef60071682cd30e3d505a" gracePeriod=2 Jan 10 07:01:29 crc kubenswrapper[4810]: I0110 07:01:29.728609 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" event={"ID":"c0ce8179-abd0-4cbe-8bf9-9aea18680a9c","Type":"ContainerStarted","Data":"080dbd69fa1a9a36ff9676f9feb2d859768c53c7fb21b0e88b1283a4d7e63ef8"} Jan 10 07:01:29 crc kubenswrapper[4810]: I0110 07:01:29.730471 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"0b41eafc-06e2-4d4c-8e1c-20220c76be9f","Type":"ContainerStarted","Data":"c11918850f7f4d79b51c135bff1cc2801d4bd89a1614e8409dacafb464114839"} Jan 10 07:01:29 crc kubenswrapper[4810]: I0110 07:01:29.730646 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/memcached-0" Jan 10 07:01:29 crc kubenswrapper[4810]: I0110 07:01:29.746264 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-9nk95" podStartSLOduration=7.623723264 podStartE2EDuration="13.74624958s" podCreationTimestamp="2026-01-10 07:01:16 +0000 UTC" firstStartedPulling="2026-01-10 07:01:23.257489646 +0000 UTC m=+911.872982529" lastFinishedPulling="2026-01-10 07:01:29.380015922 +0000 UTC m=+917.995508845" observedRunningTime="2026-01-10 07:01:29.744471858 +0000 UTC m=+918.359964761" 
watchObservedRunningTime="2026-01-10 07:01:29.74624958 +0000 UTC m=+918.361742453" Jan 10 07:01:29 crc kubenswrapper[4810]: I0110 07:01:29.764245 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/memcached-0" podStartSLOduration=5.845557075 podStartE2EDuration="9.764179106s" podCreationTimestamp="2026-01-10 07:01:20 +0000 UTC" firstStartedPulling="2026-01-10 07:01:23.49220708 +0000 UTC m=+912.107699963" lastFinishedPulling="2026-01-10 07:01:27.410829121 +0000 UTC m=+916.026321994" observedRunningTime="2026-01-10 07:01:29.762321362 +0000 UTC m=+918.377814265" watchObservedRunningTime="2026-01-10 07:01:29.764179106 +0000 UTC m=+918.379671989" Jan 10 07:01:29 crc kubenswrapper[4810]: I0110 07:01:29.778371 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" podStartSLOduration=2.991524819 podStartE2EDuration="8.778351812s" podCreationTimestamp="2026-01-10 07:01:21 +0000 UTC" firstStartedPulling="2026-01-10 07:01:23.633740772 +0000 UTC m=+912.249233655" lastFinishedPulling="2026-01-10 07:01:29.420567765 +0000 UTC m=+918.036060648" observedRunningTime="2026-01-10 07:01:29.774404618 +0000 UTC m=+918.389897501" watchObservedRunningTime="2026-01-10 07:01:29.778351812 +0000 UTC m=+918.393844695" Jan 10 07:01:30 crc kubenswrapper[4810]: I0110 07:01:30.101734 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-9nk95" Jan 10 07:01:30 crc kubenswrapper[4810]: I0110 07:01:30.164469 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvv2b\" (UniqueName: \"kubernetes.io/projected/894d6f2a-7201-41a8-b8b8-0dfa343da98a-kube-api-access-cvv2b\") pod \"894d6f2a-7201-41a8-b8b8-0dfa343da98a\" (UID: \"894d6f2a-7201-41a8-b8b8-0dfa343da98a\") " Jan 10 07:01:30 crc kubenswrapper[4810]: I0110 07:01:30.174307 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/894d6f2a-7201-41a8-b8b8-0dfa343da98a-kube-api-access-cvv2b" (OuterVolumeSpecName: "kube-api-access-cvv2b") pod "894d6f2a-7201-41a8-b8b8-0dfa343da98a" (UID: "894d6f2a-7201-41a8-b8b8-0dfa343da98a"). InnerVolumeSpecName "kube-api-access-cvv2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:01:30 crc kubenswrapper[4810]: I0110 07:01:30.266536 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvv2b\" (UniqueName: \"kubernetes.io/projected/894d6f2a-7201-41a8-b8b8-0dfa343da98a-kube-api-access-cvv2b\") on node \"crc\" DevicePath \"\"" Jan 10 07:01:30 crc kubenswrapper[4810]: I0110 07:01:30.739640 4810 generic.go:334] "Generic (PLEG): container finished" podID="894d6f2a-7201-41a8-b8b8-0dfa343da98a" containerID="7c39b0fb6122beb28cb6301cde8e4e93d5400e2e6feef60071682cd30e3d505a" exitCode=0 Jan 10 07:01:30 crc kubenswrapper[4810]: I0110 07:01:30.739701 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-9nk95" event={"ID":"894d6f2a-7201-41a8-b8b8-0dfa343da98a","Type":"ContainerDied","Data":"7c39b0fb6122beb28cb6301cde8e4e93d5400e2e6feef60071682cd30e3d505a"} Jan 10 07:01:30 crc kubenswrapper[4810]: I0110 07:01:30.740102 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-9nk95" 
event={"ID":"894d6f2a-7201-41a8-b8b8-0dfa343da98a","Type":"ContainerDied","Data":"294def4246f8643e338d267f5389bd472f8079b7070adae65a0f52d5a5db738f"} Jan 10 07:01:30 crc kubenswrapper[4810]: I0110 07:01:30.739755 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-9nk95" Jan 10 07:01:30 crc kubenswrapper[4810]: I0110 07:01:30.740171 4810 scope.go:117] "RemoveContainer" containerID="7c39b0fb6122beb28cb6301cde8e4e93d5400e2e6feef60071682cd30e3d505a" Jan 10 07:01:30 crc kubenswrapper[4810]: I0110 07:01:30.761655 4810 scope.go:117] "RemoveContainer" containerID="7c39b0fb6122beb28cb6301cde8e4e93d5400e2e6feef60071682cd30e3d505a" Jan 10 07:01:30 crc kubenswrapper[4810]: E0110 07:01:30.762237 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c39b0fb6122beb28cb6301cde8e4e93d5400e2e6feef60071682cd30e3d505a\": container with ID starting with 7c39b0fb6122beb28cb6301cde8e4e93d5400e2e6feef60071682cd30e3d505a not found: ID does not exist" containerID="7c39b0fb6122beb28cb6301cde8e4e93d5400e2e6feef60071682cd30e3d505a" Jan 10 07:01:30 crc kubenswrapper[4810]: I0110 07:01:30.762283 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c39b0fb6122beb28cb6301cde8e4e93d5400e2e6feef60071682cd30e3d505a"} err="failed to get container status \"7c39b0fb6122beb28cb6301cde8e4e93d5400e2e6feef60071682cd30e3d505a\": rpc error: code = NotFound desc = could not find container \"7c39b0fb6122beb28cb6301cde8e4e93d5400e2e6feef60071682cd30e3d505a\": container with ID starting with 7c39b0fb6122beb28cb6301cde8e4e93d5400e2e6feef60071682cd30e3d505a not found: ID does not exist" Jan 10 07:01:30 crc kubenswrapper[4810]: I0110 07:01:30.786345 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-9nk95"] Jan 10 07:01:30 crc kubenswrapper[4810]: 
I0110 07:01:30.793119 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-9nk95"] Jan 10 07:01:31 crc kubenswrapper[4810]: I0110 07:01:31.592619 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" Jan 10 07:01:31 crc kubenswrapper[4810]: I0110 07:01:31.592704 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" Jan 10 07:01:31 crc kubenswrapper[4810]: I0110 07:01:31.631208 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" Jan 10 07:01:31 crc kubenswrapper[4810]: I0110 07:01:31.703448 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="894d6f2a-7201-41a8-b8b8-0dfa343da98a" path="/var/lib/kubelet/pods/894d6f2a-7201-41a8-b8b8-0dfa343da98a/volumes" Jan 10 07:01:36 crc kubenswrapper[4810]: I0110 07:01:36.172808 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/memcached-0" Jan 10 07:01:37 crc kubenswrapper[4810]: I0110 07:01:37.787862 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"692848fd-4cf4-401e-819f-c14ac900efea","Type":"ContainerStarted","Data":"9397b9766e68dbdccc09827f2dde91e3737ff25e5ffef0796c0e6c0cda613ff3"} Jan 10 07:01:37 crc kubenswrapper[4810]: I0110 07:01:37.789611 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cb0387ec-3523-41da-b323-7249eb242b4d","Type":"ContainerStarted","Data":"0667d29df15d6d35e4e5e52c6f0e98d67a8d5c8d3f5a02fbea3f173b0dd2e525"} Jan 10 07:01:37 crc kubenswrapper[4810]: I0110 07:01:37.790565 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" 
event={"ID":"06676b15-3e4e-4fa8-bfe6-3d95ad522c31","Type":"ContainerStarted","Data":"792b3672925092a2559afb24530b01d1b2ceb48f24a35df7ea59a6d96ddbc8ae"} Jan 10 07:01:41 crc kubenswrapper[4810]: I0110 07:01:41.622775 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" Jan 10 07:01:41 crc kubenswrapper[4810]: I0110 07:01:41.819189 4810 generic.go:334] "Generic (PLEG): container finished" podID="692848fd-4cf4-401e-819f-c14ac900efea" containerID="9397b9766e68dbdccc09827f2dde91e3737ff25e5ffef0796c0e6c0cda613ff3" exitCode=0 Jan 10 07:01:41 crc kubenswrapper[4810]: I0110 07:01:41.819293 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"692848fd-4cf4-401e-819f-c14ac900efea","Type":"ContainerDied","Data":"9397b9766e68dbdccc09827f2dde91e3737ff25e5ffef0796c0e6c0cda613ff3"} Jan 10 07:01:41 crc kubenswrapper[4810]: I0110 07:01:41.821605 4810 generic.go:334] "Generic (PLEG): container finished" podID="cb0387ec-3523-41da-b323-7249eb242b4d" containerID="0667d29df15d6d35e4e5e52c6f0e98d67a8d5c8d3f5a02fbea3f173b0dd2e525" exitCode=0 Jan 10 07:01:41 crc kubenswrapper[4810]: I0110 07:01:41.821697 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cb0387ec-3523-41da-b323-7249eb242b4d","Type":"ContainerDied","Data":"0667d29df15d6d35e4e5e52c6f0e98d67a8d5c8d3f5a02fbea3f173b0dd2e525"} Jan 10 07:01:41 crc kubenswrapper[4810]: I0110 07:01:41.824244 4810 generic.go:334] "Generic (PLEG): container finished" podID="06676b15-3e4e-4fa8-bfe6-3d95ad522c31" containerID="792b3672925092a2559afb24530b01d1b2ceb48f24a35df7ea59a6d96ddbc8ae" exitCode=0 Jan 10 07:01:41 crc kubenswrapper[4810]: I0110 07:01:41.824299 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" 
event={"ID":"06676b15-3e4e-4fa8-bfe6-3d95ad522c31","Type":"ContainerDied","Data":"792b3672925092a2559afb24530b01d1b2ceb48f24a35df7ea59a6d96ddbc8ae"}
Jan 10 07:01:42 crc kubenswrapper[4810]: I0110 07:01:42.833991 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"06676b15-3e4e-4fa8-bfe6-3d95ad522c31","Type":"ContainerStarted","Data":"aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb"}
Jan 10 07:01:42 crc kubenswrapper[4810]: I0110 07:01:42.837973 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"692848fd-4cf4-401e-819f-c14ac900efea","Type":"ContainerStarted","Data":"bf7e22e268f1c6b6474a96ff020a3cf3ca568fcb6c44c8c82772fc80668447ff"}
Jan 10 07:01:42 crc kubenswrapper[4810]: I0110 07:01:42.840482 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cb0387ec-3523-41da-b323-7249eb242b4d","Type":"ContainerStarted","Data":"67a99c00cac97b1c19d219e184d18d529285066fc5dd16563659a9bdb34e3262"}
Jan 10 07:01:42 crc kubenswrapper[4810]: I0110 07:01:42.855814 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-1" podStartSLOduration=7.720588447 podStartE2EDuration="40.855795854s" podCreationTimestamp="2026-01-10 07:01:02 +0000 UTC" firstStartedPulling="2026-01-10 07:01:04.075126032 +0000 UTC m=+892.690618925" lastFinishedPulling="2026-01-10 07:01:37.210333409 +0000 UTC m=+925.825826332" observedRunningTime="2026-01-10 07:01:42.852629309 +0000 UTC m=+931.468122202" watchObservedRunningTime="2026-01-10 07:01:42.855795854 +0000 UTC m=+931.471288737"
Jan 10 07:01:42 crc kubenswrapper[4810]: I0110 07:01:42.873382 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-0" podStartSLOduration=7.495031991 podStartE2EDuration="40.873355701s" podCreationTimestamp="2026-01-10 07:01:02 +0000 UTC" firstStartedPulling="2026-01-10 07:01:03.833638087 +0000 UTC m=+892.449130970" lastFinishedPulling="2026-01-10 07:01:37.211961767 +0000 UTC m=+925.827454680" observedRunningTime="2026-01-10 07:01:42.870526394 +0000 UTC m=+931.486019327" watchObservedRunningTime="2026-01-10 07:01:42.873355701 +0000 UTC m=+931.488848604"
Jan 10 07:01:42 crc kubenswrapper[4810]: I0110 07:01:42.893017 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/openstack-galera-2" podStartSLOduration=7.723456517 podStartE2EDuration="40.892991038s" podCreationTimestamp="2026-01-10 07:01:02 +0000 UTC" firstStartedPulling="2026-01-10 07:01:04.18490494 +0000 UTC m=+892.800397843" lastFinishedPulling="2026-01-10 07:01:37.354439481 +0000 UTC m=+925.969932364" observedRunningTime="2026-01-10 07:01:42.889619737 +0000 UTC m=+931.505112630" watchObservedRunningTime="2026-01-10 07:01:42.892991038 +0000 UTC m=+931.508483941"
Jan 10 07:01:43 crc kubenswrapper[4810]: I0110 07:01:43.527559 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-0"
Jan 10 07:01:43 crc kubenswrapper[4810]: I0110 07:01:43.527641 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-0"
Jan 10 07:01:43 crc kubenswrapper[4810]: I0110 07:01:43.544712 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-1"
Jan 10 07:01:43 crc kubenswrapper[4810]: I0110 07:01:43.544761 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-1"
Jan 10 07:01:43 crc kubenswrapper[4810]: I0110 07:01:43.592704 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/openstack-galera-2"
Jan 10 07:01:43 crc kubenswrapper[4810]: I0110 07:01:43.592939 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="swift-kuttl-tests/openstack-galera-2"
Jan 10 07:01:46 crc kubenswrapper[4810]: E0110 07:01:46.251571 4810 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.9:35544->38.102.83.9:38265: write tcp 38.102.83.9:35544->38.102.83.9:38265: write: broken pipe
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.710690 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"]
Jan 10 07:01:48 crc kubenswrapper[4810]: E0110 07:01:48.711135 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="894d6f2a-7201-41a8-b8b8-0dfa343da98a" containerName="registry-server"
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.711146 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="894d6f2a-7201-41a8-b8b8-0dfa343da98a" containerName="registry-server"
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.711283 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="894d6f2a-7201-41a8-b8b8-0dfa343da98a" containerName="registry-server"
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.712033 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.713500 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-8bf9r"
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.722378 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"]
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.826466 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv\" (UID: \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.826505 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6czf\" (UniqueName: \"kubernetes.io/projected/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-kube-api-access-r6czf\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv\" (UID: \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.826583 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv\" (UID: \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.928275 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv\" (UID: \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.928556 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6czf\" (UniqueName: \"kubernetes.io/projected/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-kube-api-access-r6czf\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv\" (UID: \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.928706 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv\" (UID: \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.929448 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv\" (UID: \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.929751 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv\" (UID: \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"
Jan 10 07:01:48 crc kubenswrapper[4810]: I0110 07:01:48.951684 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6czf\" (UniqueName: \"kubernetes.io/projected/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-kube-api-access-r6czf\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv\" (UID: \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"
Jan 10 07:01:49 crc kubenswrapper[4810]: I0110 07:01:49.081939 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"
Jan 10 07:01:49 crc kubenswrapper[4810]: I0110 07:01:49.504459 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"]
Jan 10 07:01:49 crc kubenswrapper[4810]: I0110 07:01:49.890994 4810 generic.go:334] "Generic (PLEG): container finished" podID="5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" containerID="c2caa4d5df530493c280979753936e43a9147f0a88334355300e92c21d249e68" exitCode=0
Jan 10 07:01:49 crc kubenswrapper[4810]: I0110 07:01:49.891059 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv" event={"ID":"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc","Type":"ContainerDied","Data":"c2caa4d5df530493c280979753936e43a9147f0a88334355300e92c21d249e68"}
Jan 10 07:01:49 crc kubenswrapper[4810]: I0110 07:01:49.891099 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv" event={"ID":"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc","Type":"ContainerStarted","Data":"59f8ed9eea5cf7493d664a08f5d067f3fc1e65cbab330b35b74ccc82264c7f8f"}
Jan 10 07:01:51 crc kubenswrapper[4810]: I0110 07:01:51.715430 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-2"
Jan 10 07:01:51 crc kubenswrapper[4810]: I0110 07:01:51.802822 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-2"
Jan 10 07:01:51 crc kubenswrapper[4810]: I0110 07:01:51.904797 4810 generic.go:334] "Generic (PLEG): container finished" podID="5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" containerID="9b3883789dce067f2f463848a5772f35118a8e78db77f8c090501bae5c9fcb89" exitCode=0
Jan 10 07:01:51 crc kubenswrapper[4810]: I0110 07:01:51.905879 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv" event={"ID":"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc","Type":"ContainerDied","Data":"9b3883789dce067f2f463848a5772f35118a8e78db77f8c090501bae5c9fcb89"}
Jan 10 07:01:52 crc kubenswrapper[4810]: I0110 07:01:52.924513 4810 generic.go:334] "Generic (PLEG): container finished" podID="5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" containerID="bf091f6245bcbc9fb4c30afd42548815f3e0338d7def9e77b2768c1ee3ecdd49" exitCode=0
Jan 10 07:01:52 crc kubenswrapper[4810]: I0110 07:01:52.924670 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv" event={"ID":"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc","Type":"ContainerDied","Data":"bf091f6245bcbc9fb4c30afd42548815f3e0338d7def9e77b2768c1ee3ecdd49"}
Jan 10 07:01:54 crc kubenswrapper[4810]: I0110 07:01:54.294767 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"
Jan 10 07:01:54 crc kubenswrapper[4810]: I0110 07:01:54.406422 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-bundle\") pod \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\" (UID: \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\") "
Jan 10 07:01:54 crc kubenswrapper[4810]: I0110 07:01:54.406529 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-util\") pod \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\" (UID: \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\") "
Jan 10 07:01:54 crc kubenswrapper[4810]: I0110 07:01:54.406629 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6czf\" (UniqueName: \"kubernetes.io/projected/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-kube-api-access-r6czf\") pod \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\" (UID: \"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc\") "
Jan 10 07:01:54 crc kubenswrapper[4810]: I0110 07:01:54.407282 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-bundle" (OuterVolumeSpecName: "bundle") pod "5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" (UID: "5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:01:54 crc kubenswrapper[4810]: I0110 07:01:54.419250 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-util" (OuterVolumeSpecName: "util") pod "5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" (UID: "5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:01:54 crc kubenswrapper[4810]: I0110 07:01:54.422074 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-kube-api-access-r6czf" (OuterVolumeSpecName: "kube-api-access-r6czf") pod "5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" (UID: "5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc"). InnerVolumeSpecName "kube-api-access-r6czf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:01:54 crc kubenswrapper[4810]: I0110 07:01:54.508267 4810 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-util\") on node \"crc\" DevicePath \"\""
Jan 10 07:01:54 crc kubenswrapper[4810]: I0110 07:01:54.508326 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6czf\" (UniqueName: \"kubernetes.io/projected/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-kube-api-access-r6czf\") on node \"crc\" DevicePath \"\""
Jan 10 07:01:54 crc kubenswrapper[4810]: I0110 07:01:54.508350 4810 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc-bundle\") on node \"crc\" DevicePath \"\""
Jan 10 07:01:54 crc kubenswrapper[4810]: I0110 07:01:54.943884 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv" event={"ID":"5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc","Type":"ContainerDied","Data":"59f8ed9eea5cf7493d664a08f5d067f3fc1e65cbab330b35b74ccc82264c7f8f"}
Jan 10 07:01:54 crc kubenswrapper[4810]: I0110 07:01:54.943942 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59f8ed9eea5cf7493d664a08f5d067f3fc1e65cbab330b35b74ccc82264c7f8f"
Jan 10 07:01:54 crc kubenswrapper[4810]: I0110 07:01:54.944308 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.212048 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/root-account-create-update-gdxlc"]
Jan 10 07:02:02 crc kubenswrapper[4810]: E0110 07:02:02.212885 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" containerName="pull"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.212900 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" containerName="pull"
Jan 10 07:02:02 crc kubenswrapper[4810]: E0110 07:02:02.212921 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" containerName="extract"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.212929 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" containerName="extract"
Jan 10 07:02:02 crc kubenswrapper[4810]: E0110 07:02:02.212941 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" containerName="util"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.212949 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" containerName="util"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.213071 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" containerName="extract"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.213558 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-gdxlc"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.215392 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"openstack-mariadb-root-db-secret"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.223579 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-gdxlc"]
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.326733 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0b1f3d-7fc9-4771-81c5-6722593deb1e-operator-scripts\") pod \"root-account-create-update-gdxlc\" (UID: \"de0b1f3d-7fc9-4771-81c5-6722593deb1e\") " pod="swift-kuttl-tests/root-account-create-update-gdxlc"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.326831 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s92fg\" (UniqueName: \"kubernetes.io/projected/de0b1f3d-7fc9-4771-81c5-6722593deb1e-kube-api-access-s92fg\") pod \"root-account-create-update-gdxlc\" (UID: \"de0b1f3d-7fc9-4771-81c5-6722593deb1e\") " pod="swift-kuttl-tests/root-account-create-update-gdxlc"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.428582 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0b1f3d-7fc9-4771-81c5-6722593deb1e-operator-scripts\") pod \"root-account-create-update-gdxlc\" (UID: \"de0b1f3d-7fc9-4771-81c5-6722593deb1e\") " pod="swift-kuttl-tests/root-account-create-update-gdxlc"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.428663 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s92fg\" (UniqueName: \"kubernetes.io/projected/de0b1f3d-7fc9-4771-81c5-6722593deb1e-kube-api-access-s92fg\") pod \"root-account-create-update-gdxlc\" (UID: \"de0b1f3d-7fc9-4771-81c5-6722593deb1e\") " pod="swift-kuttl-tests/root-account-create-update-gdxlc"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.429616 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0b1f3d-7fc9-4771-81c5-6722593deb1e-operator-scripts\") pod \"root-account-create-update-gdxlc\" (UID: \"de0b1f3d-7fc9-4771-81c5-6722593deb1e\") " pod="swift-kuttl-tests/root-account-create-update-gdxlc"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.450782 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s92fg\" (UniqueName: \"kubernetes.io/projected/de0b1f3d-7fc9-4771-81c5-6722593deb1e-kube-api-access-s92fg\") pod \"root-account-create-update-gdxlc\" (UID: \"de0b1f3d-7fc9-4771-81c5-6722593deb1e\") " pod="swift-kuttl-tests/root-account-create-update-gdxlc"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.528264 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-gdxlc"
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.804242 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-gdxlc"]
Jan 10 07:02:02 crc kubenswrapper[4810]: I0110 07:02:02.995828 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-gdxlc" event={"ID":"de0b1f3d-7fc9-4771-81c5-6722593deb1e","Type":"ContainerStarted","Data":"19fcadfefa2ca5334efbe986973bc08c61c68d26e0580364a2cd2901a909dbc8"}
Jan 10 07:02:03 crc kubenswrapper[4810]: I0110 07:02:03.653611 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/openstack-galera-2" podUID="cb0387ec-3523-41da-b323-7249eb242b4d" containerName="galera" probeResult="failure" output=<
Jan 10 07:02:03 crc kubenswrapper[4810]: wsrep_local_state_comment (Donor/Desynced) differs from Synced
Jan 10 07:02:03 crc kubenswrapper[4810]: >
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.003117 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-gdxlc" event={"ID":"de0b1f3d-7fc9-4771-81c5-6722593deb1e","Type":"ContainerStarted","Data":"c2be034fc72f171ac8b87be0c14eac435525efdd4eeb007c49e180965967ab71"}
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.016422 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/root-account-create-update-gdxlc" podStartSLOduration=2.01640114 podStartE2EDuration="2.01640114s" podCreationTimestamp="2026-01-10 07:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:02:04.013236985 +0000 UTC m=+952.628729868" watchObservedRunningTime="2026-01-10 07:02:04.01640114 +0000 UTC m=+952.631894023"
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.147519 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-0"
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.220511 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-0"
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.686144 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ktdfq"]
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.687490 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktdfq"
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.703548 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ktdfq"]
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.760008 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae59625d-9860-4641-b2e8-925ff530829a-catalog-content\") pod \"community-operators-ktdfq\" (UID: \"ae59625d-9860-4641-b2e8-925ff530829a\") " pod="openshift-marketplace/community-operators-ktdfq"
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.760077 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae59625d-9860-4641-b2e8-925ff530829a-utilities\") pod \"community-operators-ktdfq\" (UID: \"ae59625d-9860-4641-b2e8-925ff530829a\") " pod="openshift-marketplace/community-operators-ktdfq"
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.760142 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd85x\" (UniqueName: \"kubernetes.io/projected/ae59625d-9860-4641-b2e8-925ff530829a-kube-api-access-rd85x\") pod \"community-operators-ktdfq\" (UID: \"ae59625d-9860-4641-b2e8-925ff530829a\") " pod="openshift-marketplace/community-operators-ktdfq"
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.861161 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae59625d-9860-4641-b2e8-925ff530829a-catalog-content\") pod \"community-operators-ktdfq\" (UID: \"ae59625d-9860-4641-b2e8-925ff530829a\") " pod="openshift-marketplace/community-operators-ktdfq"
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.861239 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae59625d-9860-4641-b2e8-925ff530829a-utilities\") pod \"community-operators-ktdfq\" (UID: \"ae59625d-9860-4641-b2e8-925ff530829a\") " pod="openshift-marketplace/community-operators-ktdfq"
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.861286 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd85x\" (UniqueName: \"kubernetes.io/projected/ae59625d-9860-4641-b2e8-925ff530829a-kube-api-access-rd85x\") pod \"community-operators-ktdfq\" (UID: \"ae59625d-9860-4641-b2e8-925ff530829a\") " pod="openshift-marketplace/community-operators-ktdfq"
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.861873 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae59625d-9860-4641-b2e8-925ff530829a-catalog-content\") pod \"community-operators-ktdfq\" (UID: \"ae59625d-9860-4641-b2e8-925ff530829a\") " pod="openshift-marketplace/community-operators-ktdfq"
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.862227 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae59625d-9860-4641-b2e8-925ff530829a-utilities\") pod \"community-operators-ktdfq\" (UID: \"ae59625d-9860-4641-b2e8-925ff530829a\") " pod="openshift-marketplace/community-operators-ktdfq"
Jan 10 07:02:04 crc kubenswrapper[4810]: I0110 07:02:04.888652 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd85x\" (UniqueName: \"kubernetes.io/projected/ae59625d-9860-4641-b2e8-925ff530829a-kube-api-access-rd85x\") pod \"community-operators-ktdfq\" (UID: \"ae59625d-9860-4641-b2e8-925ff530829a\") " pod="openshift-marketplace/community-operators-ktdfq"
Jan 10 07:02:05 crc kubenswrapper[4810]: I0110 07:02:05.002383 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktdfq"
Jan 10 07:02:05 crc kubenswrapper[4810]: I0110 07:02:05.011948 4810 generic.go:334] "Generic (PLEG): container finished" podID="de0b1f3d-7fc9-4771-81c5-6722593deb1e" containerID="c2be034fc72f171ac8b87be0c14eac435525efdd4eeb007c49e180965967ab71" exitCode=0
Jan 10 07:02:05 crc kubenswrapper[4810]: I0110 07:02:05.012042 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-gdxlc" event={"ID":"de0b1f3d-7fc9-4771-81c5-6722593deb1e","Type":"ContainerDied","Data":"c2be034fc72f171ac8b87be0c14eac435525efdd4eeb007c49e180965967ab71"}
Jan 10 07:02:05 crc kubenswrapper[4810]: I0110 07:02:05.332333 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ktdfq"]
Jan 10 07:02:05 crc kubenswrapper[4810]: W0110 07:02:05.339799 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae59625d_9860_4641_b2e8_925ff530829a.slice/crio-790b9ab5d6d7fd856217460da7c04a3a7687a7843a9fb3818fcef6d0b9638ae7 WatchSource:0}: Error finding container 790b9ab5d6d7fd856217460da7c04a3a7687a7843a9fb3818fcef6d0b9638ae7: Status 404 returned error can't find the container with id 790b9ab5d6d7fd856217460da7c04a3a7687a7843a9fb3818fcef6d0b9638ae7
Jan 10 07:02:06 crc kubenswrapper[4810]: I0110 07:02:06.022157 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktdfq" event={"ID":"ae59625d-9860-4641-b2e8-925ff530829a","Type":"ContainerStarted","Data":"17f13276b7b8d9c345a1d708eda1f68910ab62d6b8327042c20031c1dff2762e"}
Jan 10 07:02:06 crc kubenswrapper[4810]: I0110 07:02:06.024016 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktdfq" event={"ID":"ae59625d-9860-4641-b2e8-925ff530829a","Type":"ContainerStarted","Data":"790b9ab5d6d7fd856217460da7c04a3a7687a7843a9fb3818fcef6d0b9638ae7"}
Jan 10 07:02:06 crc kubenswrapper[4810]: I0110 07:02:06.336736 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-gdxlc"
Jan 10 07:02:06 crc kubenswrapper[4810]: I0110 07:02:06.492406 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s92fg\" (UniqueName: \"kubernetes.io/projected/de0b1f3d-7fc9-4771-81c5-6722593deb1e-kube-api-access-s92fg\") pod \"de0b1f3d-7fc9-4771-81c5-6722593deb1e\" (UID: \"de0b1f3d-7fc9-4771-81c5-6722593deb1e\") "
Jan 10 07:02:06 crc kubenswrapper[4810]: I0110 07:02:06.492558 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0b1f3d-7fc9-4771-81c5-6722593deb1e-operator-scripts\") pod \"de0b1f3d-7fc9-4771-81c5-6722593deb1e\" (UID: \"de0b1f3d-7fc9-4771-81c5-6722593deb1e\") "
Jan 10 07:02:06 crc kubenswrapper[4810]: I0110 07:02:06.493243 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0b1f3d-7fc9-4771-81c5-6722593deb1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de0b1f3d-7fc9-4771-81c5-6722593deb1e" (UID: "de0b1f3d-7fc9-4771-81c5-6722593deb1e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:02:06 crc kubenswrapper[4810]: I0110 07:02:06.508928 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de0b1f3d-7fc9-4771-81c5-6722593deb1e-kube-api-access-s92fg" (OuterVolumeSpecName: "kube-api-access-s92fg") pod "de0b1f3d-7fc9-4771-81c5-6722593deb1e" (UID: "de0b1f3d-7fc9-4771-81c5-6722593deb1e"). InnerVolumeSpecName "kube-api-access-s92fg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:02:06 crc kubenswrapper[4810]: I0110 07:02:06.594524 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s92fg\" (UniqueName: \"kubernetes.io/projected/de0b1f3d-7fc9-4771-81c5-6722593deb1e-kube-api-access-s92fg\") on node \"crc\" DevicePath \"\""
Jan 10 07:02:06 crc kubenswrapper[4810]: I0110 07:02:06.594557 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de0b1f3d-7fc9-4771-81c5-6722593deb1e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 10 07:02:06 crc kubenswrapper[4810]: I0110 07:02:06.773508 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="swift-kuttl-tests/openstack-galera-1"
Jan 10 07:02:06 crc kubenswrapper[4810]: I0110 07:02:06.859152 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/openstack-galera-1"
Jan 10 07:02:07 crc kubenswrapper[4810]: I0110 07:02:07.029740 4810 generic.go:334] "Generic (PLEG): container finished" podID="ae59625d-9860-4641-b2e8-925ff530829a" containerID="17f13276b7b8d9c345a1d708eda1f68910ab62d6b8327042c20031c1dff2762e" exitCode=0
Jan 10 07:02:07 crc kubenswrapper[4810]: I0110 07:02:07.029831 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktdfq" event={"ID":"ae59625d-9860-4641-b2e8-925ff530829a","Type":"ContainerDied","Data":"17f13276b7b8d9c345a1d708eda1f68910ab62d6b8327042c20031c1dff2762e"}
Jan 10 07:02:07 crc kubenswrapper[4810]: I0110 07:02:07.033356 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-gdxlc"
Jan 10 07:02:07 crc kubenswrapper[4810]: I0110 07:02:07.033861 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/root-account-create-update-gdxlc" event={"ID":"de0b1f3d-7fc9-4771-81c5-6722593deb1e","Type":"ContainerDied","Data":"19fcadfefa2ca5334efbe986973bc08c61c68d26e0580364a2cd2901a909dbc8"}
Jan 10 07:02:07 crc kubenswrapper[4810]: I0110 07:02:07.033898 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19fcadfefa2ca5334efbe986973bc08c61c68d26e0580364a2cd2901a909dbc8"
Jan 10 07:02:07 crc kubenswrapper[4810]: I0110 07:02:07.748518 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd"]
Jan 10 07:02:07 crc kubenswrapper[4810]: E0110 07:02:07.749075 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0b1f3d-7fc9-4771-81c5-6722593deb1e" containerName="mariadb-account-create-update"
Jan 10 07:02:07 crc kubenswrapper[4810]: I0110 07:02:07.749097 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0b1f3d-7fc9-4771-81c5-6722593deb1e" containerName="mariadb-account-create-update"
Jan 10 07:02:07 crc kubenswrapper[4810]: I0110 07:02:07.749257 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0b1f3d-7fc9-4771-81c5-6722593deb1e" containerName="mariadb-account-create-update"
Jan 10 07:02:07 crc kubenswrapper[4810]: I0110 07:02:07.749763 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd"
Jan 10 07:02:07 crc kubenswrapper[4810]: I0110 07:02:07.755679 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-7www2"
Jan 10 07:02:07 crc kubenswrapper[4810]: I0110 07:02:07.767104 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd"]
Jan 10 07:02:07 crc kubenswrapper[4810]: I0110 07:02:07.913352 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrcxz\" (UniqueName: \"kubernetes.io/projected/5f6d9ded-28c8-45af-a726-a1165a822d3e-kube-api-access-wrcxz\") pod \"rabbitmq-cluster-operator-779fc9694b-kz7fd\" (UID: \"5f6d9ded-28c8-45af-a726-a1165a822d3e\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd"
Jan 10 07:02:08 crc kubenswrapper[4810]: I0110 07:02:08.014402 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrcxz\" (UniqueName: \"kubernetes.io/projected/5f6d9ded-28c8-45af-a726-a1165a822d3e-kube-api-access-wrcxz\") pod \"rabbitmq-cluster-operator-779fc9694b-kz7fd\" (UID: \"5f6d9ded-28c8-45af-a726-a1165a822d3e\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd"
Jan 10 07:02:08 crc kubenswrapper[4810]: I0110 07:02:08.037692 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrcxz\" (UniqueName: \"kubernetes.io/projected/5f6d9ded-28c8-45af-a726-a1165a822d3e-kube-api-access-wrcxz\") pod \"rabbitmq-cluster-operator-779fc9694b-kz7fd\" (UID: \"5f6d9ded-28c8-45af-a726-a1165a822d3e\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd"
Jan 10 07:02:08 crc kubenswrapper[4810]: I0110 07:02:08.040829 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktdfq" event={"ID":"ae59625d-9860-4641-b2e8-925ff530829a","Type":"ContainerStarted","Data":"b21dc7274783719294f252647fc308afd95cc415ea1ea8acd65541f55d5a5d3f"}
Jan 10 07:02:08 crc kubenswrapper[4810]: I0110 07:02:08.067286 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd"
Jan 10 07:02:08 crc kubenswrapper[4810]: I0110 07:02:08.450235 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd"]
Jan 10 07:02:09 crc kubenswrapper[4810]: I0110 07:02:09.048619 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd" event={"ID":"5f6d9ded-28c8-45af-a726-a1165a822d3e","Type":"ContainerStarted","Data":"f0a23bcf314611c4d05403b720e9314386cbaa056bf940a2f7979424402ecfa1"}
Jan 10 07:02:09 crc kubenswrapper[4810]: I0110 07:02:09.052019 4810 generic.go:334] "Generic (PLEG): container finished" podID="ae59625d-9860-4641-b2e8-925ff530829a" containerID="b21dc7274783719294f252647fc308afd95cc415ea1ea8acd65541f55d5a5d3f" exitCode=0
Jan 10 07:02:09 crc kubenswrapper[4810]: I0110 07:02:09.052070 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktdfq" event={"ID":"ae59625d-9860-4641-b2e8-925ff530829a","Type":"ContainerDied","Data":"b21dc7274783719294f252647fc308afd95cc415ea1ea8acd65541f55d5a5d3f"}
Jan 10 07:02:10 crc kubenswrapper[4810]: I0110 07:02:10.063492 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktdfq" event={"ID":"ae59625d-9860-4641-b2e8-925ff530829a","Type":"ContainerStarted","Data":"347b4a485f876e5479e09228d2764d661598e19d09200d7c8095d1f2a3a83ac1"}
Jan 10 07:02:10 crc kubenswrapper[4810]: I0110 07:02:10.088271 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ktdfq" podStartSLOduration=3.600077574 podStartE2EDuration="6.088240942s" podCreationTimestamp="2026-01-10 07:02:04 +0000 UTC" firstStartedPulling="2026-01-10 07:02:07.032400602 +0000 UTC m=+955.647893495" lastFinishedPulling="2026-01-10 07:02:09.52056398 +0000 UTC m=+958.136056863" observedRunningTime="2026-01-10 07:02:10.083079769 +0000 UTC m=+958.698572652" watchObservedRunningTime="2026-01-10 07:02:10.088240942 +0000 UTC m=+958.703733825"
Jan 10 07:02:12 crc kubenswrapper[4810]: I0110 07:02:12.081150 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd" event={"ID":"5f6d9ded-28c8-45af-a726-a1165a822d3e","Type":"ContainerStarted","Data":"cda67367bc1ec5c505976945c28294226968305fa1eae74b7f512c0345895c06"}
Jan 10 07:02:12 crc kubenswrapper[4810]: I0110 07:02:12.109809 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd" podStartSLOduration=1.925947157 podStartE2EDuration="5.109785706s" podCreationTimestamp="2026-01-10 07:02:07 +0000 UTC" firstStartedPulling="2026-01-10 07:02:08.463222706 +0000 UTC m=+957.078715589" lastFinishedPulling="2026-01-10 07:02:11.647061255 +0000 UTC m=+960.262554138" observedRunningTime="2026-01-10 07:02:12.101567871 +0000 UTC m=+960.717060784" watchObservedRunningTime="2026-01-10 07:02:12.109785706 +0000 UTC m=+960.725278619"
Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.399159 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"]
Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.400718 4810 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.403913 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-default-user" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.404021 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-plugins-conf" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.404098 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"rabbitmq-server-conf" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.404513 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-server-dockercfg-wtqjw" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.405781 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.427116 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.513042 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.513121 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 
07:02:14.513163 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.513252 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.513286 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.513327 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.513360 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.513502 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqqdz\" (UniqueName: \"kubernetes.io/projected/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-kube-api-access-jqqdz\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.615122 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.615253 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.615335 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.615380 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.615437 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.615477 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.615604 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqqdz\" (UniqueName: \"kubernetes.io/projected/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-kube-api-access-jqqdz\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.615774 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.617901 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.618269 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.618556 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.629073 4810 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.629117 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/22dd6950e77b733270e16e048a43fde5d3e791dfffcfd27361ea8840472d7613/globalmount\"" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.629118 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.641801 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.646276 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.699768 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqqdz\" (UniqueName: \"kubernetes.io/projected/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-kube-api-access-jqqdz\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:14 crc kubenswrapper[4810]: I0110 07:02:14.791625 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee\") pod \"rabbitmq-server-0\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:15 crc kubenswrapper[4810]: I0110 07:02:15.002594 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ktdfq" Jan 10 07:02:15 crc kubenswrapper[4810]: I0110 07:02:15.002945 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ktdfq" Jan 10 07:02:15 crc kubenswrapper[4810]: I0110 07:02:15.027017 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:02:15 crc kubenswrapper[4810]: I0110 07:02:15.078494 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ktdfq" Jan 10 07:02:15 crc kubenswrapper[4810]: I0110 07:02:15.196821 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ktdfq" Jan 10 07:02:15 crc kubenswrapper[4810]: I0110 07:02:15.524123 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 10 07:02:15 crc kubenswrapper[4810]: W0110 07:02:15.530917 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c176cb_a9d8_49ed_9d35_78a2975d9dd6.slice/crio-a5028fcbbe63ce9c6c73b201b7c92c92f98c9dffabf4d95161f2b87b7885b155 WatchSource:0}: Error finding container a5028fcbbe63ce9c6c73b201b7c92c92f98c9dffabf4d95161f2b87b7885b155: Status 404 returned error can't find the container with id a5028fcbbe63ce9c6c73b201b7c92c92f98c9dffabf4d95161f2b87b7885b155 Jan 10 07:02:16 crc kubenswrapper[4810]: I0110 07:02:16.118620 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"13c176cb-a9d8-49ed-9d35-78a2975d9dd6","Type":"ContainerStarted","Data":"a5028fcbbe63ce9c6c73b201b7c92c92f98c9dffabf4d95161f2b87b7885b155"} Jan 10 07:02:16 crc kubenswrapper[4810]: I0110 07:02:16.479948 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ktdfq"] Jan 10 07:02:17 crc kubenswrapper[4810]: I0110 07:02:17.087967 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-tp9xq"] Jan 10 07:02:17 crc kubenswrapper[4810]: I0110 07:02:17.090272 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-tp9xq" Jan 10 07:02:17 crc kubenswrapper[4810]: I0110 07:02:17.093074 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-4zdcb" Jan 10 07:02:17 crc kubenswrapper[4810]: I0110 07:02:17.113726 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-tp9xq"] Jan 10 07:02:17 crc kubenswrapper[4810]: I0110 07:02:17.154309 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2stw2\" (UniqueName: \"kubernetes.io/projected/31586912-6862-4b77-892d-76af90a5b22a-kube-api-access-2stw2\") pod \"keystone-operator-index-tp9xq\" (UID: \"31586912-6862-4b77-892d-76af90a5b22a\") " pod="openstack-operators/keystone-operator-index-tp9xq" Jan 10 07:02:17 crc kubenswrapper[4810]: I0110 07:02:17.257794 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2stw2\" (UniqueName: \"kubernetes.io/projected/31586912-6862-4b77-892d-76af90a5b22a-kube-api-access-2stw2\") pod \"keystone-operator-index-tp9xq\" (UID: \"31586912-6862-4b77-892d-76af90a5b22a\") " pod="openstack-operators/keystone-operator-index-tp9xq" Jan 10 07:02:17 crc kubenswrapper[4810]: I0110 07:02:17.291864 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2stw2\" (UniqueName: \"kubernetes.io/projected/31586912-6862-4b77-892d-76af90a5b22a-kube-api-access-2stw2\") pod \"keystone-operator-index-tp9xq\" (UID: \"31586912-6862-4b77-892d-76af90a5b22a\") " pod="openstack-operators/keystone-operator-index-tp9xq" Jan 10 07:02:17 crc kubenswrapper[4810]: I0110 07:02:17.413670 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-tp9xq" Jan 10 07:02:17 crc kubenswrapper[4810]: I0110 07:02:17.871726 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-tp9xq"] Jan 10 07:02:17 crc kubenswrapper[4810]: W0110 07:02:17.894367 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31586912_6862_4b77_892d_76af90a5b22a.slice/crio-81c17ca9f21361c23dc8706fd494031df33b3e84cb6ae036990849e9151a89e7 WatchSource:0}: Error finding container 81c17ca9f21361c23dc8706fd494031df33b3e84cb6ae036990849e9151a89e7: Status 404 returned error can't find the container with id 81c17ca9f21361c23dc8706fd494031df33b3e84cb6ae036990849e9151a89e7 Jan 10 07:02:18 crc kubenswrapper[4810]: I0110 07:02:18.139650 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-tp9xq" event={"ID":"31586912-6862-4b77-892d-76af90a5b22a","Type":"ContainerStarted","Data":"81c17ca9f21361c23dc8706fd494031df33b3e84cb6ae036990849e9151a89e7"} Jan 10 07:02:18 crc kubenswrapper[4810]: I0110 07:02:18.139816 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ktdfq" podUID="ae59625d-9860-4641-b2e8-925ff530829a" containerName="registry-server" containerID="cri-o://347b4a485f876e5479e09228d2764d661598e19d09200d7c8095d1f2a3a83ac1" gracePeriod=2 Jan 10 07:02:19 crc kubenswrapper[4810]: I0110 07:02:19.154225 4810 generic.go:334] "Generic (PLEG): container finished" podID="ae59625d-9860-4641-b2e8-925ff530829a" containerID="347b4a485f876e5479e09228d2764d661598e19d09200d7c8095d1f2a3a83ac1" exitCode=0 Jan 10 07:02:19 crc kubenswrapper[4810]: I0110 07:02:19.154263 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktdfq" 
event={"ID":"ae59625d-9860-4641-b2e8-925ff530829a","Type":"ContainerDied","Data":"347b4a485f876e5479e09228d2764d661598e19d09200d7c8095d1f2a3a83ac1"} Jan 10 07:02:19 crc kubenswrapper[4810]: I0110 07:02:19.698979 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktdfq" Jan 10 07:02:19 crc kubenswrapper[4810]: I0110 07:02:19.797717 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd85x\" (UniqueName: \"kubernetes.io/projected/ae59625d-9860-4641-b2e8-925ff530829a-kube-api-access-rd85x\") pod \"ae59625d-9860-4641-b2e8-925ff530829a\" (UID: \"ae59625d-9860-4641-b2e8-925ff530829a\") " Jan 10 07:02:19 crc kubenswrapper[4810]: I0110 07:02:19.797880 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae59625d-9860-4641-b2e8-925ff530829a-catalog-content\") pod \"ae59625d-9860-4641-b2e8-925ff530829a\" (UID: \"ae59625d-9860-4641-b2e8-925ff530829a\") " Jan 10 07:02:19 crc kubenswrapper[4810]: I0110 07:02:19.797987 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae59625d-9860-4641-b2e8-925ff530829a-utilities\") pod \"ae59625d-9860-4641-b2e8-925ff530829a\" (UID: \"ae59625d-9860-4641-b2e8-925ff530829a\") " Jan 10 07:02:19 crc kubenswrapper[4810]: I0110 07:02:19.799068 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae59625d-9860-4641-b2e8-925ff530829a-utilities" (OuterVolumeSpecName: "utilities") pod "ae59625d-9860-4641-b2e8-925ff530829a" (UID: "ae59625d-9860-4641-b2e8-925ff530829a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:02:19 crc kubenswrapper[4810]: I0110 07:02:19.804102 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae59625d-9860-4641-b2e8-925ff530829a-kube-api-access-rd85x" (OuterVolumeSpecName: "kube-api-access-rd85x") pod "ae59625d-9860-4641-b2e8-925ff530829a" (UID: "ae59625d-9860-4641-b2e8-925ff530829a"). InnerVolumeSpecName "kube-api-access-rd85x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:02:19 crc kubenswrapper[4810]: I0110 07:02:19.856431 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae59625d-9860-4641-b2e8-925ff530829a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae59625d-9860-4641-b2e8-925ff530829a" (UID: "ae59625d-9860-4641-b2e8-925ff530829a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:02:19 crc kubenswrapper[4810]: I0110 07:02:19.899419 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae59625d-9860-4641-b2e8-925ff530829a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 07:02:19 crc kubenswrapper[4810]: I0110 07:02:19.899455 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae59625d-9860-4641-b2e8-925ff530829a-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 07:02:19 crc kubenswrapper[4810]: I0110 07:02:19.899469 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd85x\" (UniqueName: \"kubernetes.io/projected/ae59625d-9860-4641-b2e8-925ff530829a-kube-api-access-rd85x\") on node \"crc\" DevicePath \"\"" Jan 10 07:02:20 crc kubenswrapper[4810]: I0110 07:02:20.162765 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ktdfq" 
event={"ID":"ae59625d-9860-4641-b2e8-925ff530829a","Type":"ContainerDied","Data":"790b9ab5d6d7fd856217460da7c04a3a7687a7843a9fb3818fcef6d0b9638ae7"} Jan 10 07:02:20 crc kubenswrapper[4810]: I0110 07:02:20.162811 4810 scope.go:117] "RemoveContainer" containerID="347b4a485f876e5479e09228d2764d661598e19d09200d7c8095d1f2a3a83ac1" Jan 10 07:02:20 crc kubenswrapper[4810]: I0110 07:02:20.162835 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ktdfq" Jan 10 07:02:20 crc kubenswrapper[4810]: I0110 07:02:20.200554 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ktdfq"] Jan 10 07:02:20 crc kubenswrapper[4810]: I0110 07:02:20.204519 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ktdfq"] Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.097883 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4nk8f"] Jan 10 07:02:21 crc kubenswrapper[4810]: E0110 07:02:21.098666 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae59625d-9860-4641-b2e8-925ff530829a" containerName="registry-server" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.098690 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae59625d-9860-4641-b2e8-925ff530829a" containerName="registry-server" Jan 10 07:02:21 crc kubenswrapper[4810]: E0110 07:02:21.098720 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae59625d-9860-4641-b2e8-925ff530829a" containerName="extract-utilities" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.098733 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae59625d-9860-4641-b2e8-925ff530829a" containerName="extract-utilities" Jan 10 07:02:21 crc kubenswrapper[4810]: E0110 07:02:21.098759 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ae59625d-9860-4641-b2e8-925ff530829a" containerName="extract-content" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.098772 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae59625d-9860-4641-b2e8-925ff530829a" containerName="extract-content" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.098991 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae59625d-9860-4641-b2e8-925ff530829a" containerName="registry-server" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.100443 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.121102 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4nk8f"] Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.218909 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a7045-8c15-4c1f-9926-533ab2805712-utilities\") pod \"redhat-marketplace-4nk8f\" (UID: \"d49a7045-8c15-4c1f-9926-533ab2805712\") " pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.218976 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpwq8\" (UniqueName: \"kubernetes.io/projected/d49a7045-8c15-4c1f-9926-533ab2805712-kube-api-access-cpwq8\") pod \"redhat-marketplace-4nk8f\" (UID: \"d49a7045-8c15-4c1f-9926-533ab2805712\") " pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.219068 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a7045-8c15-4c1f-9926-533ab2805712-catalog-content\") pod \"redhat-marketplace-4nk8f\" (UID: 
\"d49a7045-8c15-4c1f-9926-533ab2805712\") " pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.320063 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a7045-8c15-4c1f-9926-533ab2805712-catalog-content\") pod \"redhat-marketplace-4nk8f\" (UID: \"d49a7045-8c15-4c1f-9926-533ab2805712\") " pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.320111 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a7045-8c15-4c1f-9926-533ab2805712-utilities\") pod \"redhat-marketplace-4nk8f\" (UID: \"d49a7045-8c15-4c1f-9926-533ab2805712\") " pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.320149 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpwq8\" (UniqueName: \"kubernetes.io/projected/d49a7045-8c15-4c1f-9926-533ab2805712-kube-api-access-cpwq8\") pod \"redhat-marketplace-4nk8f\" (UID: \"d49a7045-8c15-4c1f-9926-533ab2805712\") " pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.320676 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a7045-8c15-4c1f-9926-533ab2805712-catalog-content\") pod \"redhat-marketplace-4nk8f\" (UID: \"d49a7045-8c15-4c1f-9926-533ab2805712\") " pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.320958 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a7045-8c15-4c1f-9926-533ab2805712-utilities\") pod \"redhat-marketplace-4nk8f\" (UID: \"d49a7045-8c15-4c1f-9926-533ab2805712\") " 
pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.348555 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpwq8\" (UniqueName: \"kubernetes.io/projected/d49a7045-8c15-4c1f-9926-533ab2805712-kube-api-access-cpwq8\") pod \"redhat-marketplace-4nk8f\" (UID: \"d49a7045-8c15-4c1f-9926-533ab2805712\") " pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.439745 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:21 crc kubenswrapper[4810]: I0110 07:02:21.701304 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae59625d-9860-4641-b2e8-925ff530829a" path="/var/lib/kubelet/pods/ae59625d-9860-4641-b2e8-925ff530829a/volumes" Jan 10 07:02:22 crc kubenswrapper[4810]: I0110 07:02:22.580615 4810 scope.go:117] "RemoveContainer" containerID="b21dc7274783719294f252647fc308afd95cc415ea1ea8acd65541f55d5a5d3f" Jan 10 07:02:31 crc kubenswrapper[4810]: I0110 07:02:31.861041 4810 scope.go:117] "RemoveContainer" containerID="17f13276b7b8d9c345a1d708eda1f68910ab62d6b8327042c20031c1dff2762e" Jan 10 07:02:32 crc kubenswrapper[4810]: I0110 07:02:32.742231 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4nk8f"] Jan 10 07:02:33 crc kubenswrapper[4810]: I0110 07:02:33.265880 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nk8f" event={"ID":"d49a7045-8c15-4c1f-9926-533ab2805712","Type":"ContainerStarted","Data":"4fb82411e77074321b9d31def39395f2286be8f2ce47702195e08ec758a127b0"} Jan 10 07:02:34 crc kubenswrapper[4810]: E0110 07:02:34.340183 4810 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="rabbitmq:4.1.1-management" Jan 10 07:02:34 crc kubenswrapper[4810]: E0110 07:02:34.341248 4810 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="rabbitmq:4.1.1-management" Jan 10 07:02:34 crc kubenswrapper[4810]: E0110 07:02:34.341380 4810 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:rabbitmq:4.1.1-management,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqqdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_swift-kuttl-tests(13c176cb-a9d8-49ed-9d35-78a2975d9dd6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 10 07:02:34 crc kubenswrapper[4810]: E0110 07:02:34.342465 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="swift-kuttl-tests/rabbitmq-server-0" podUID="13c176cb-a9d8-49ed-9d35-78a2975d9dd6" Jan 10 07:02:35 crc kubenswrapper[4810]: I0110 07:02:35.284051 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-tp9xq" event={"ID":"31586912-6862-4b77-892d-76af90a5b22a","Type":"ContainerStarted","Data":"1caa7fe14f267fe3e7e193bf0699f776e819a6b0cb2b5beba411b7108d6ac4a4"} Jan 10 07:02:35 crc kubenswrapper[4810]: I0110 07:02:35.285574 4810 generic.go:334] "Generic (PLEG): container finished" podID="d49a7045-8c15-4c1f-9926-533ab2805712" containerID="db959d34a576c3c485af366048f41d9c402296cb000502c532ebb1db458d2226" exitCode=0 Jan 10 07:02:35 crc kubenswrapper[4810]: I0110 07:02:35.285601 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nk8f" event={"ID":"d49a7045-8c15-4c1f-9926-533ab2805712","Type":"ContainerDied","Data":"db959d34a576c3c485af366048f41d9c402296cb000502c532ebb1db458d2226"} Jan 10 07:02:35 crc kubenswrapper[4810]: E0110 07:02:35.286976 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"rabbitmq:4.1.1-management\\\"\"" pod="swift-kuttl-tests/rabbitmq-server-0" podUID="13c176cb-a9d8-49ed-9d35-78a2975d9dd6" Jan 10 07:02:35 crc kubenswrapper[4810]: I0110 07:02:35.300497 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-tp9xq" podStartSLOduration=2.005926134 podStartE2EDuration="18.300475819s" podCreationTimestamp="2026-01-10 07:02:17 +0000 UTC" 
firstStartedPulling="2026-01-10 07:02:17.897166962 +0000 UTC m=+966.512659845" lastFinishedPulling="2026-01-10 07:02:34.191716647 +0000 UTC m=+982.807209530" observedRunningTime="2026-01-10 07:02:35.297240392 +0000 UTC m=+983.912733295" watchObservedRunningTime="2026-01-10 07:02:35.300475819 +0000 UTC m=+983.915968712" Jan 10 07:02:36 crc kubenswrapper[4810]: I0110 07:02:36.294871 4810 generic.go:334] "Generic (PLEG): container finished" podID="d49a7045-8c15-4c1f-9926-533ab2805712" containerID="dfde14cd44ea48ef9ab08e6c6a5312eae8b8b8a29623a7a4ecdbc02a8c42feea" exitCode=0 Jan 10 07:02:36 crc kubenswrapper[4810]: I0110 07:02:36.294966 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nk8f" event={"ID":"d49a7045-8c15-4c1f-9926-533ab2805712","Type":"ContainerDied","Data":"dfde14cd44ea48ef9ab08e6c6a5312eae8b8b8a29623a7a4ecdbc02a8c42feea"} Jan 10 07:02:37 crc kubenswrapper[4810]: I0110 07:02:37.305910 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nk8f" event={"ID":"d49a7045-8c15-4c1f-9926-533ab2805712","Type":"ContainerStarted","Data":"f2841fe3838a56e417265bcdebadbc5c650616ae2c268d53c468af235f15cc23"} Jan 10 07:02:37 crc kubenswrapper[4810]: I0110 07:02:37.332624 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4nk8f" podStartSLOduration=14.919284852 podStartE2EDuration="16.332598772s" podCreationTimestamp="2026-01-10 07:02:21 +0000 UTC" firstStartedPulling="2026-01-10 07:02:35.286931404 +0000 UTC m=+983.902424287" lastFinishedPulling="2026-01-10 07:02:36.700245324 +0000 UTC m=+985.315738207" observedRunningTime="2026-01-10 07:02:37.325316148 +0000 UTC m=+985.940809131" watchObservedRunningTime="2026-01-10 07:02:37.332598772 +0000 UTC m=+985.948091695" Jan 10 07:02:37 crc kubenswrapper[4810]: I0110 07:02:37.413932 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/keystone-operator-index-tp9xq" Jan 10 07:02:37 crc kubenswrapper[4810]: I0110 07:02:37.414001 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-tp9xq" Jan 10 07:02:37 crc kubenswrapper[4810]: I0110 07:02:37.460890 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-tp9xq" Jan 10 07:02:41 crc kubenswrapper[4810]: I0110 07:02:41.440531 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:41 crc kubenswrapper[4810]: I0110 07:02:41.441029 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:41 crc kubenswrapper[4810]: I0110 07:02:41.498995 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:42 crc kubenswrapper[4810]: I0110 07:02:42.386183 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:46 crc kubenswrapper[4810]: I0110 07:02:46.895694 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4nk8f"] Jan 10 07:02:46 crc kubenswrapper[4810]: I0110 07:02:46.896324 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4nk8f" podUID="d49a7045-8c15-4c1f-9926-533ab2805712" containerName="registry-server" containerID="cri-o://f2841fe3838a56e417265bcdebadbc5c650616ae2c268d53c468af235f15cc23" gracePeriod=2 Jan 10 07:02:47 crc kubenswrapper[4810]: I0110 07:02:47.474708 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-tp9xq" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.336766 4810 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.391397 4810 generic.go:334] "Generic (PLEG): container finished" podID="d49a7045-8c15-4c1f-9926-533ab2805712" containerID="f2841fe3838a56e417265bcdebadbc5c650616ae2c268d53c468af235f15cc23" exitCode=0 Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.391478 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nk8f" event={"ID":"d49a7045-8c15-4c1f-9926-533ab2805712","Type":"ContainerDied","Data":"f2841fe3838a56e417265bcdebadbc5c650616ae2c268d53c468af235f15cc23"} Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.391489 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4nk8f" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.391549 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4nk8f" event={"ID":"d49a7045-8c15-4c1f-9926-533ab2805712","Type":"ContainerDied","Data":"4fb82411e77074321b9d31def39395f2286be8f2ce47702195e08ec758a127b0"} Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.391595 4810 scope.go:117] "RemoveContainer" containerID="f2841fe3838a56e417265bcdebadbc5c650616ae2c268d53c468af235f15cc23" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.409978 4810 scope.go:117] "RemoveContainer" containerID="dfde14cd44ea48ef9ab08e6c6a5312eae8b8b8a29623a7a4ecdbc02a8c42feea" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.427658 4810 scope.go:117] "RemoveContainer" containerID="db959d34a576c3c485af366048f41d9c402296cb000502c532ebb1db458d2226" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.438678 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpwq8\" (UniqueName: 
\"kubernetes.io/projected/d49a7045-8c15-4c1f-9926-533ab2805712-kube-api-access-cpwq8\") pod \"d49a7045-8c15-4c1f-9926-533ab2805712\" (UID: \"d49a7045-8c15-4c1f-9926-533ab2805712\") " Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.438744 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a7045-8c15-4c1f-9926-533ab2805712-utilities\") pod \"d49a7045-8c15-4c1f-9926-533ab2805712\" (UID: \"d49a7045-8c15-4c1f-9926-533ab2805712\") " Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.438904 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a7045-8c15-4c1f-9926-533ab2805712-catalog-content\") pod \"d49a7045-8c15-4c1f-9926-533ab2805712\" (UID: \"d49a7045-8c15-4c1f-9926-533ab2805712\") " Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.439883 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49a7045-8c15-4c1f-9926-533ab2805712-utilities" (OuterVolumeSpecName: "utilities") pod "d49a7045-8c15-4c1f-9926-533ab2805712" (UID: "d49a7045-8c15-4c1f-9926-533ab2805712"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.455443 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d49a7045-8c15-4c1f-9926-533ab2805712-kube-api-access-cpwq8" (OuterVolumeSpecName: "kube-api-access-cpwq8") pod "d49a7045-8c15-4c1f-9926-533ab2805712" (UID: "d49a7045-8c15-4c1f-9926-533ab2805712"). InnerVolumeSpecName "kube-api-access-cpwq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.455556 4810 scope.go:117] "RemoveContainer" containerID="f2841fe3838a56e417265bcdebadbc5c650616ae2c268d53c468af235f15cc23" Jan 10 07:02:49 crc kubenswrapper[4810]: E0110 07:02:49.455995 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2841fe3838a56e417265bcdebadbc5c650616ae2c268d53c468af235f15cc23\": container with ID starting with f2841fe3838a56e417265bcdebadbc5c650616ae2c268d53c468af235f15cc23 not found: ID does not exist" containerID="f2841fe3838a56e417265bcdebadbc5c650616ae2c268d53c468af235f15cc23" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.456038 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2841fe3838a56e417265bcdebadbc5c650616ae2c268d53c468af235f15cc23"} err="failed to get container status \"f2841fe3838a56e417265bcdebadbc5c650616ae2c268d53c468af235f15cc23\": rpc error: code = NotFound desc = could not find container \"f2841fe3838a56e417265bcdebadbc5c650616ae2c268d53c468af235f15cc23\": container with ID starting with f2841fe3838a56e417265bcdebadbc5c650616ae2c268d53c468af235f15cc23 not found: ID does not exist" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.456067 4810 scope.go:117] "RemoveContainer" containerID="dfde14cd44ea48ef9ab08e6c6a5312eae8b8b8a29623a7a4ecdbc02a8c42feea" Jan 10 07:02:49 crc kubenswrapper[4810]: E0110 07:02:49.456574 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfde14cd44ea48ef9ab08e6c6a5312eae8b8b8a29623a7a4ecdbc02a8c42feea\": container with ID starting with dfde14cd44ea48ef9ab08e6c6a5312eae8b8b8a29623a7a4ecdbc02a8c42feea not found: ID does not exist" containerID="dfde14cd44ea48ef9ab08e6c6a5312eae8b8b8a29623a7a4ecdbc02a8c42feea" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.456607 
4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfde14cd44ea48ef9ab08e6c6a5312eae8b8b8a29623a7a4ecdbc02a8c42feea"} err="failed to get container status \"dfde14cd44ea48ef9ab08e6c6a5312eae8b8b8a29623a7a4ecdbc02a8c42feea\": rpc error: code = NotFound desc = could not find container \"dfde14cd44ea48ef9ab08e6c6a5312eae8b8b8a29623a7a4ecdbc02a8c42feea\": container with ID starting with dfde14cd44ea48ef9ab08e6c6a5312eae8b8b8a29623a7a4ecdbc02a8c42feea not found: ID does not exist" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.456632 4810 scope.go:117] "RemoveContainer" containerID="db959d34a576c3c485af366048f41d9c402296cb000502c532ebb1db458d2226" Jan 10 07:02:49 crc kubenswrapper[4810]: E0110 07:02:49.457045 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db959d34a576c3c485af366048f41d9c402296cb000502c532ebb1db458d2226\": container with ID starting with db959d34a576c3c485af366048f41d9c402296cb000502c532ebb1db458d2226 not found: ID does not exist" containerID="db959d34a576c3c485af366048f41d9c402296cb000502c532ebb1db458d2226" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.457081 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db959d34a576c3c485af366048f41d9c402296cb000502c532ebb1db458d2226"} err="failed to get container status \"db959d34a576c3c485af366048f41d9c402296cb000502c532ebb1db458d2226\": rpc error: code = NotFound desc = could not find container \"db959d34a576c3c485af366048f41d9c402296cb000502c532ebb1db458d2226\": container with ID starting with db959d34a576c3c485af366048f41d9c402296cb000502c532ebb1db458d2226 not found: ID does not exist" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.483883 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d49a7045-8c15-4c1f-9926-533ab2805712-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "d49a7045-8c15-4c1f-9926-533ab2805712" (UID: "d49a7045-8c15-4c1f-9926-533ab2805712"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.539997 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d49a7045-8c15-4c1f-9926-533ab2805712-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.540048 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpwq8\" (UniqueName: \"kubernetes.io/projected/d49a7045-8c15-4c1f-9926-533ab2805712-kube-api-access-cpwq8\") on node \"crc\" DevicePath \"\"" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.540071 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d49a7045-8c15-4c1f-9926-533ab2805712-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.736611 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4nk8f"] Jan 10 07:02:49 crc kubenswrapper[4810]: I0110 07:02:49.744117 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4nk8f"] Jan 10 07:02:50 crc kubenswrapper[4810]: I0110 07:02:50.882961 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 10 07:02:50 crc kubenswrapper[4810]: I0110 07:02:50.883327 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.348270 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv"] Jan 10 07:02:51 crc kubenswrapper[4810]: E0110 07:02:51.349082 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49a7045-8c15-4c1f-9926-533ab2805712" containerName="extract-content" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.349102 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49a7045-8c15-4c1f-9926-533ab2805712" containerName="extract-content" Jan 10 07:02:51 crc kubenswrapper[4810]: E0110 07:02:51.349117 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49a7045-8c15-4c1f-9926-533ab2805712" containerName="registry-server" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.349127 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49a7045-8c15-4c1f-9926-533ab2805712" containerName="registry-server" Jan 10 07:02:51 crc kubenswrapper[4810]: E0110 07:02:51.349148 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d49a7045-8c15-4c1f-9926-533ab2805712" containerName="extract-utilities" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.349155 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d49a7045-8c15-4c1f-9926-533ab2805712" containerName="extract-utilities" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.349360 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="d49a7045-8c15-4c1f-9926-533ab2805712" containerName="registry-server" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.351484 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.356223 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv"] Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.357479 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-8bf9r" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.377095 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6db2faa-2559-40fa-be95-29fb8673afa5-bundle\") pod \"11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv\" (UID: \"e6db2faa-2559-40fa-be95-29fb8673afa5\") " pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.377242 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fps29\" (UniqueName: \"kubernetes.io/projected/e6db2faa-2559-40fa-be95-29fb8673afa5-kube-api-access-fps29\") pod \"11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv\" (UID: \"e6db2faa-2559-40fa-be95-29fb8673afa5\") " pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.377351 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6db2faa-2559-40fa-be95-29fb8673afa5-util\") pod \"11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv\" (UID: \"e6db2faa-2559-40fa-be95-29fb8673afa5\") " pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 
07:02:51.408613 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"13c176cb-a9d8-49ed-9d35-78a2975d9dd6","Type":"ContainerStarted","Data":"9de41f5b3f8c47cf3fb986f5f82afe14c4c2d7aeb129dbeef40727c175a8f327"} Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.478675 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6db2faa-2559-40fa-be95-29fb8673afa5-bundle\") pod \"11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv\" (UID: \"e6db2faa-2559-40fa-be95-29fb8673afa5\") " pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.479428 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6db2faa-2559-40fa-be95-29fb8673afa5-bundle\") pod \"11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv\" (UID: \"e6db2faa-2559-40fa-be95-29fb8673afa5\") " pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.478815 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fps29\" (UniqueName: \"kubernetes.io/projected/e6db2faa-2559-40fa-be95-29fb8673afa5-kube-api-access-fps29\") pod \"11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv\" (UID: \"e6db2faa-2559-40fa-be95-29fb8673afa5\") " pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.479554 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6db2faa-2559-40fa-be95-29fb8673afa5-util\") pod \"11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv\" (UID: 
\"e6db2faa-2559-40fa-be95-29fb8673afa5\") " pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.479907 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6db2faa-2559-40fa-be95-29fb8673afa5-util\") pod \"11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv\" (UID: \"e6db2faa-2559-40fa-be95-29fb8673afa5\") " pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.496849 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fps29\" (UniqueName: \"kubernetes.io/projected/e6db2faa-2559-40fa-be95-29fb8673afa5-kube-api-access-fps29\") pod \"11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv\" (UID: \"e6db2faa-2559-40fa-be95-29fb8673afa5\") " pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.692180 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" Jan 10 07:02:51 crc kubenswrapper[4810]: I0110 07:02:51.702722 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d49a7045-8c15-4c1f-9926-533ab2805712" path="/var/lib/kubelet/pods/d49a7045-8c15-4c1f-9926-533ab2805712/volumes" Jan 10 07:02:52 crc kubenswrapper[4810]: I0110 07:02:52.121105 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv"] Jan 10 07:02:54 crc kubenswrapper[4810]: I0110 07:02:52.413874 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" event={"ID":"e6db2faa-2559-40fa-be95-29fb8673afa5","Type":"ContainerStarted","Data":"7f386e73c08f803e53bafbf7698b0abd6566db3d754bdb39146603a22e73e171"} Jan 10 07:02:55 crc kubenswrapper[4810]: I0110 07:02:55.449235 4810 generic.go:334] "Generic (PLEG): container finished" podID="e6db2faa-2559-40fa-be95-29fb8673afa5" containerID="ae38555bf51d6484c475bb991f05081f57c6cc3492e808531c0c8807b09648da" exitCode=0 Jan 10 07:02:55 crc kubenswrapper[4810]: I0110 07:02:55.449357 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" event={"ID":"e6db2faa-2559-40fa-be95-29fb8673afa5","Type":"ContainerDied","Data":"ae38555bf51d6484c475bb991f05081f57c6cc3492e808531c0c8807b09648da"} Jan 10 07:02:58 crc kubenswrapper[4810]: I0110 07:02:58.473987 4810 generic.go:334] "Generic (PLEG): container finished" podID="e6db2faa-2559-40fa-be95-29fb8673afa5" containerID="eef1aca1b73720c3bdf620a5345cdcc49f54062522f0a7537096b8d4154e6cdf" exitCode=0 Jan 10 07:02:58 crc kubenswrapper[4810]: I0110 07:02:58.474052 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" event={"ID":"e6db2faa-2559-40fa-be95-29fb8673afa5","Type":"ContainerDied","Data":"eef1aca1b73720c3bdf620a5345cdcc49f54062522f0a7537096b8d4154e6cdf"} Jan 10 07:02:59 crc kubenswrapper[4810]: I0110 07:02:59.483859 4810 generic.go:334] "Generic (PLEG): container finished" podID="e6db2faa-2559-40fa-be95-29fb8673afa5" containerID="bba019c7ea9d8f796435fdd27e6cb428e7ccf13234225b2876ac51ebfe2e0c03" exitCode=0 Jan 10 07:02:59 crc kubenswrapper[4810]: I0110 07:02:59.483927 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" event={"ID":"e6db2faa-2559-40fa-be95-29fb8673afa5","Type":"ContainerDied","Data":"bba019c7ea9d8f796435fdd27e6cb428e7ccf13234225b2876ac51ebfe2e0c03"} Jan 10 07:03:00 crc kubenswrapper[4810]: I0110 07:03:00.790278 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" Jan 10 07:03:00 crc kubenswrapper[4810]: I0110 07:03:00.837584 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fps29\" (UniqueName: \"kubernetes.io/projected/e6db2faa-2559-40fa-be95-29fb8673afa5-kube-api-access-fps29\") pod \"e6db2faa-2559-40fa-be95-29fb8673afa5\" (UID: \"e6db2faa-2559-40fa-be95-29fb8673afa5\") " Jan 10 07:03:00 crc kubenswrapper[4810]: I0110 07:03:00.837676 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6db2faa-2559-40fa-be95-29fb8673afa5-bundle\") pod \"e6db2faa-2559-40fa-be95-29fb8673afa5\" (UID: \"e6db2faa-2559-40fa-be95-29fb8673afa5\") " Jan 10 07:03:00 crc kubenswrapper[4810]: I0110 07:03:00.837720 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e6db2faa-2559-40fa-be95-29fb8673afa5-util\") pod \"e6db2faa-2559-40fa-be95-29fb8673afa5\" (UID: \"e6db2faa-2559-40fa-be95-29fb8673afa5\") " Jan 10 07:03:00 crc kubenswrapper[4810]: I0110 07:03:00.838637 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6db2faa-2559-40fa-be95-29fb8673afa5-bundle" (OuterVolumeSpecName: "bundle") pod "e6db2faa-2559-40fa-be95-29fb8673afa5" (UID: "e6db2faa-2559-40fa-be95-29fb8673afa5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:03:00 crc kubenswrapper[4810]: I0110 07:03:00.844428 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6db2faa-2559-40fa-be95-29fb8673afa5-kube-api-access-fps29" (OuterVolumeSpecName: "kube-api-access-fps29") pod "e6db2faa-2559-40fa-be95-29fb8673afa5" (UID: "e6db2faa-2559-40fa-be95-29fb8673afa5"). InnerVolumeSpecName "kube-api-access-fps29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:03:00 crc kubenswrapper[4810]: I0110 07:03:00.858143 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6db2faa-2559-40fa-be95-29fb8673afa5-util" (OuterVolumeSpecName: "util") pod "e6db2faa-2559-40fa-be95-29fb8673afa5" (UID: "e6db2faa-2559-40fa-be95-29fb8673afa5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:03:00 crc kubenswrapper[4810]: I0110 07:03:00.939894 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fps29\" (UniqueName: \"kubernetes.io/projected/e6db2faa-2559-40fa-be95-29fb8673afa5-kube-api-access-fps29\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:00 crc kubenswrapper[4810]: I0110 07:03:00.939940 4810 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6db2faa-2559-40fa-be95-29fb8673afa5-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:00 crc kubenswrapper[4810]: I0110 07:03:00.939954 4810 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6db2faa-2559-40fa-be95-29fb8673afa5-util\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:01 crc kubenswrapper[4810]: I0110 07:03:01.499228 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" event={"ID":"e6db2faa-2559-40fa-be95-29fb8673afa5","Type":"ContainerDied","Data":"7f386e73c08f803e53bafbf7698b0abd6566db3d754bdb39146603a22e73e171"} Jan 10 07:03:01 crc kubenswrapper[4810]: I0110 07:03:01.499281 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f386e73c08f803e53bafbf7698b0abd6566db3d754bdb39146603a22e73e171" Jan 10 07:03:01 crc kubenswrapper[4810]: I0110 07:03:01.499452 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.325098 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq"] Jan 10 07:03:10 crc kubenswrapper[4810]: E0110 07:03:10.325831 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6db2faa-2559-40fa-be95-29fb8673afa5" containerName="extract" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.325845 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6db2faa-2559-40fa-be95-29fb8673afa5" containerName="extract" Jan 10 07:03:10 crc kubenswrapper[4810]: E0110 07:03:10.325865 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6db2faa-2559-40fa-be95-29fb8673afa5" containerName="util" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.325872 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6db2faa-2559-40fa-be95-29fb8673afa5" containerName="util" Jan 10 07:03:10 crc kubenswrapper[4810]: E0110 07:03:10.325888 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6db2faa-2559-40fa-be95-29fb8673afa5" containerName="pull" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.325897 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6db2faa-2559-40fa-be95-29fb8673afa5" containerName="pull" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.326038 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6db2faa-2559-40fa-be95-29fb8673afa5" containerName="extract" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.326544 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.329680 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.329994 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-k9xzk" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.340432 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq"] Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.429223 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1067de2e-5591-420e-a068-a3279fb17d44-apiservice-cert\") pod \"keystone-operator-controller-manager-765cdbf886-wmmsq\" (UID: \"1067de2e-5591-420e-a068-a3279fb17d44\") " pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.429281 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f76bd\" (UniqueName: \"kubernetes.io/projected/1067de2e-5591-420e-a068-a3279fb17d44-kube-api-access-f76bd\") pod \"keystone-operator-controller-manager-765cdbf886-wmmsq\" (UID: \"1067de2e-5591-420e-a068-a3279fb17d44\") " pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.429357 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1067de2e-5591-420e-a068-a3279fb17d44-webhook-cert\") pod \"keystone-operator-controller-manager-765cdbf886-wmmsq\" 
(UID: \"1067de2e-5591-420e-a068-a3279fb17d44\") " pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.530537 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f76bd\" (UniqueName: \"kubernetes.io/projected/1067de2e-5591-420e-a068-a3279fb17d44-kube-api-access-f76bd\") pod \"keystone-operator-controller-manager-765cdbf886-wmmsq\" (UID: \"1067de2e-5591-420e-a068-a3279fb17d44\") " pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.530677 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1067de2e-5591-420e-a068-a3279fb17d44-webhook-cert\") pod \"keystone-operator-controller-manager-765cdbf886-wmmsq\" (UID: \"1067de2e-5591-420e-a068-a3279fb17d44\") " pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.530801 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1067de2e-5591-420e-a068-a3279fb17d44-apiservice-cert\") pod \"keystone-operator-controller-manager-765cdbf886-wmmsq\" (UID: \"1067de2e-5591-420e-a068-a3279fb17d44\") " pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.536834 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1067de2e-5591-420e-a068-a3279fb17d44-apiservice-cert\") pod \"keystone-operator-controller-manager-765cdbf886-wmmsq\" (UID: \"1067de2e-5591-420e-a068-a3279fb17d44\") " pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.541438 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1067de2e-5591-420e-a068-a3279fb17d44-webhook-cert\") pod \"keystone-operator-controller-manager-765cdbf886-wmmsq\" (UID: \"1067de2e-5591-420e-a068-a3279fb17d44\") " pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.551663 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f76bd\" (UniqueName: \"kubernetes.io/projected/1067de2e-5591-420e-a068-a3279fb17d44-kube-api-access-f76bd\") pod \"keystone-operator-controller-manager-765cdbf886-wmmsq\" (UID: \"1067de2e-5591-420e-a068-a3279fb17d44\") " pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:03:10 crc kubenswrapper[4810]: I0110 07:03:10.647706 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:03:11 crc kubenswrapper[4810]: I0110 07:03:11.101272 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq"] Jan 10 07:03:11 crc kubenswrapper[4810]: I0110 07:03:11.580458 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" event={"ID":"1067de2e-5591-420e-a068-a3279fb17d44","Type":"ContainerStarted","Data":"bb99ff9b3dbc10a84952f9f63279c94eedb02f677b5de3618585287576b29c00"} Jan 10 07:03:13 crc kubenswrapper[4810]: I0110 07:03:13.289401 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4b549"] Jan 10 07:03:13 crc kubenswrapper[4810]: I0110 07:03:13.291835 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:13 crc kubenswrapper[4810]: I0110 07:03:13.302158 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4b549"] Jan 10 07:03:13 crc kubenswrapper[4810]: I0110 07:03:13.478620 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dca32237-a0ff-45fe-b207-7a4ab529a88d-utilities\") pod \"certified-operators-4b549\" (UID: \"dca32237-a0ff-45fe-b207-7a4ab529a88d\") " pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:13 crc kubenswrapper[4810]: I0110 07:03:13.478719 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s592\" (UniqueName: \"kubernetes.io/projected/dca32237-a0ff-45fe-b207-7a4ab529a88d-kube-api-access-5s592\") pod \"certified-operators-4b549\" (UID: \"dca32237-a0ff-45fe-b207-7a4ab529a88d\") " pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:13 crc kubenswrapper[4810]: I0110 07:03:13.478782 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dca32237-a0ff-45fe-b207-7a4ab529a88d-catalog-content\") pod \"certified-operators-4b549\" (UID: \"dca32237-a0ff-45fe-b207-7a4ab529a88d\") " pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:13 crc kubenswrapper[4810]: I0110 07:03:13.580018 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dca32237-a0ff-45fe-b207-7a4ab529a88d-catalog-content\") pod \"certified-operators-4b549\" (UID: \"dca32237-a0ff-45fe-b207-7a4ab529a88d\") " pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:13 crc kubenswrapper[4810]: I0110 07:03:13.580101 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dca32237-a0ff-45fe-b207-7a4ab529a88d-utilities\") pod \"certified-operators-4b549\" (UID: \"dca32237-a0ff-45fe-b207-7a4ab529a88d\") " pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:13 crc kubenswrapper[4810]: I0110 07:03:13.580156 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s592\" (UniqueName: \"kubernetes.io/projected/dca32237-a0ff-45fe-b207-7a4ab529a88d-kube-api-access-5s592\") pod \"certified-operators-4b549\" (UID: \"dca32237-a0ff-45fe-b207-7a4ab529a88d\") " pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:13 crc kubenswrapper[4810]: I0110 07:03:13.580647 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dca32237-a0ff-45fe-b207-7a4ab529a88d-catalog-content\") pod \"certified-operators-4b549\" (UID: \"dca32237-a0ff-45fe-b207-7a4ab529a88d\") " pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:13 crc kubenswrapper[4810]: I0110 07:03:13.580748 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dca32237-a0ff-45fe-b207-7a4ab529a88d-utilities\") pod \"certified-operators-4b549\" (UID: \"dca32237-a0ff-45fe-b207-7a4ab529a88d\") " pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:13 crc kubenswrapper[4810]: I0110 07:03:13.598064 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s592\" (UniqueName: \"kubernetes.io/projected/dca32237-a0ff-45fe-b207-7a4ab529a88d-kube-api-access-5s592\") pod \"certified-operators-4b549\" (UID: \"dca32237-a0ff-45fe-b207-7a4ab529a88d\") " pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:13 crc kubenswrapper[4810]: I0110 07:03:13.614537 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:14 crc kubenswrapper[4810]: I0110 07:03:14.685866 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fcjnm"] Jan 10 07:03:14 crc kubenswrapper[4810]: I0110 07:03:14.688042 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:14 crc kubenswrapper[4810]: I0110 07:03:14.705923 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fcjnm"] Jan 10 07:03:14 crc kubenswrapper[4810]: I0110 07:03:14.807548 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3445e5da-a88c-4868-92da-6677bc21e851-catalog-content\") pod \"redhat-operators-fcjnm\" (UID: \"3445e5da-a88c-4868-92da-6677bc21e851\") " pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:14 crc kubenswrapper[4810]: I0110 07:03:14.807667 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6rhp\" (UniqueName: \"kubernetes.io/projected/3445e5da-a88c-4868-92da-6677bc21e851-kube-api-access-k6rhp\") pod \"redhat-operators-fcjnm\" (UID: \"3445e5da-a88c-4868-92da-6677bc21e851\") " pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:14 crc kubenswrapper[4810]: I0110 07:03:14.807898 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3445e5da-a88c-4868-92da-6677bc21e851-utilities\") pod \"redhat-operators-fcjnm\" (UID: \"3445e5da-a88c-4868-92da-6677bc21e851\") " pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:14 crc kubenswrapper[4810]: I0110 07:03:14.908811 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/3445e5da-a88c-4868-92da-6677bc21e851-catalog-content\") pod \"redhat-operators-fcjnm\" (UID: \"3445e5da-a88c-4868-92da-6677bc21e851\") " pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:14 crc kubenswrapper[4810]: I0110 07:03:14.908865 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6rhp\" (UniqueName: \"kubernetes.io/projected/3445e5da-a88c-4868-92da-6677bc21e851-kube-api-access-k6rhp\") pod \"redhat-operators-fcjnm\" (UID: \"3445e5da-a88c-4868-92da-6677bc21e851\") " pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:14 crc kubenswrapper[4810]: I0110 07:03:14.908931 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3445e5da-a88c-4868-92da-6677bc21e851-utilities\") pod \"redhat-operators-fcjnm\" (UID: \"3445e5da-a88c-4868-92da-6677bc21e851\") " pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:14 crc kubenswrapper[4810]: I0110 07:03:14.909426 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3445e5da-a88c-4868-92da-6677bc21e851-catalog-content\") pod \"redhat-operators-fcjnm\" (UID: \"3445e5da-a88c-4868-92da-6677bc21e851\") " pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:14 crc kubenswrapper[4810]: I0110 07:03:14.909472 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3445e5da-a88c-4868-92da-6677bc21e851-utilities\") pod \"redhat-operators-fcjnm\" (UID: \"3445e5da-a88c-4868-92da-6677bc21e851\") " pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:14 crc kubenswrapper[4810]: I0110 07:03:14.924766 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6rhp\" (UniqueName: 
\"kubernetes.io/projected/3445e5da-a88c-4868-92da-6677bc21e851-kube-api-access-k6rhp\") pod \"redhat-operators-fcjnm\" (UID: \"3445e5da-a88c-4868-92da-6677bc21e851\") " pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:15 crc kubenswrapper[4810]: I0110 07:03:15.013207 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:15 crc kubenswrapper[4810]: I0110 07:03:15.449802 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fcjnm"] Jan 10 07:03:15 crc kubenswrapper[4810]: I0110 07:03:15.547279 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4b549"] Jan 10 07:03:15 crc kubenswrapper[4810]: W0110 07:03:15.548267 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddca32237_a0ff_45fe_b207_7a4ab529a88d.slice/crio-f7928a12f9884909871a48c9244c6eb22e01e832337de8fd4a4971f913f1849b WatchSource:0}: Error finding container f7928a12f9884909871a48c9244c6eb22e01e832337de8fd4a4971f913f1849b: Status 404 returned error can't find the container with id f7928a12f9884909871a48c9244c6eb22e01e832337de8fd4a4971f913f1849b Jan 10 07:03:15 crc kubenswrapper[4810]: I0110 07:03:15.624783 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4b549" event={"ID":"dca32237-a0ff-45fe-b207-7a4ab529a88d","Type":"ContainerStarted","Data":"f7928a12f9884909871a48c9244c6eb22e01e832337de8fd4a4971f913f1849b"} Jan 10 07:03:15 crc kubenswrapper[4810]: I0110 07:03:15.626985 4810 generic.go:334] "Generic (PLEG): container finished" podID="3445e5da-a88c-4868-92da-6677bc21e851" containerID="d73748f9c7ac2111115ad64fcff68f5c861a8f896e38902a3a9c9bfd68439700" exitCode=0 Jan 10 07:03:15 crc kubenswrapper[4810]: I0110 07:03:15.627061 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-fcjnm" event={"ID":"3445e5da-a88c-4868-92da-6677bc21e851","Type":"ContainerDied","Data":"d73748f9c7ac2111115ad64fcff68f5c861a8f896e38902a3a9c9bfd68439700"} Jan 10 07:03:15 crc kubenswrapper[4810]: I0110 07:03:15.627087 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcjnm" event={"ID":"3445e5da-a88c-4868-92da-6677bc21e851","Type":"ContainerStarted","Data":"9b90512983e73cf559950f67c2d51e068b08a966f235f5520f280fd439bee951"} Jan 10 07:03:15 crc kubenswrapper[4810]: I0110 07:03:15.628250 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 10 07:03:15 crc kubenswrapper[4810]: I0110 07:03:15.628688 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" event={"ID":"1067de2e-5591-420e-a068-a3279fb17d44","Type":"ContainerStarted","Data":"1d94ec8a64b2454754fe232550dc25e5ca0da17af4ef46b3fcc62d10e5d9ff64"} Jan 10 07:03:15 crc kubenswrapper[4810]: I0110 07:03:15.628922 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:03:15 crc kubenswrapper[4810]: I0110 07:03:15.660593 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" podStartSLOduration=1.548041446 podStartE2EDuration="5.660558138s" podCreationTimestamp="2026-01-10 07:03:10 +0000 UTC" firstStartedPulling="2026-01-10 07:03:11.120157975 +0000 UTC m=+1019.735650858" lastFinishedPulling="2026-01-10 07:03:15.232674657 +0000 UTC m=+1023.848167550" observedRunningTime="2026-01-10 07:03:15.660452916 +0000 UTC m=+1024.275945789" watchObservedRunningTime="2026-01-10 07:03:15.660558138 +0000 UTC m=+1024.276051021" Jan 10 07:03:16 crc kubenswrapper[4810]: I0110 07:03:16.645296 4810 generic.go:334] 
"Generic (PLEG): container finished" podID="dca32237-a0ff-45fe-b207-7a4ab529a88d" containerID="b8eb44792446196bd611185330c9518c41df02895818ab4abb0bd27e02cd7bf1" exitCode=0 Jan 10 07:03:16 crc kubenswrapper[4810]: I0110 07:03:16.645656 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4b549" event={"ID":"dca32237-a0ff-45fe-b207-7a4ab529a88d","Type":"ContainerDied","Data":"b8eb44792446196bd611185330c9518c41df02895818ab4abb0bd27e02cd7bf1"} Jan 10 07:03:16 crc kubenswrapper[4810]: I0110 07:03:16.653362 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcjnm" event={"ID":"3445e5da-a88c-4868-92da-6677bc21e851","Type":"ContainerStarted","Data":"cff8ccd27ed5ba0a52d9159f53554b3138fd37b60821efe8ddf20f48fdfdd412"} Jan 10 07:03:17 crc kubenswrapper[4810]: I0110 07:03:17.661869 4810 generic.go:334] "Generic (PLEG): container finished" podID="3445e5da-a88c-4868-92da-6677bc21e851" containerID="cff8ccd27ed5ba0a52d9159f53554b3138fd37b60821efe8ddf20f48fdfdd412" exitCode=0 Jan 10 07:03:17 crc kubenswrapper[4810]: I0110 07:03:17.661981 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcjnm" event={"ID":"3445e5da-a88c-4868-92da-6677bc21e851","Type":"ContainerDied","Data":"cff8ccd27ed5ba0a52d9159f53554b3138fd37b60821efe8ddf20f48fdfdd412"} Jan 10 07:03:17 crc kubenswrapper[4810]: I0110 07:03:17.665356 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4b549" event={"ID":"dca32237-a0ff-45fe-b207-7a4ab529a88d","Type":"ContainerStarted","Data":"2483a2202462b5c46eb72675a598cbeeefbb7824e8008e6b5480d2d5c8f80545"} Jan 10 07:03:18 crc kubenswrapper[4810]: I0110 07:03:18.674714 4810 generic.go:334] "Generic (PLEG): container finished" podID="dca32237-a0ff-45fe-b207-7a4ab529a88d" containerID="2483a2202462b5c46eb72675a598cbeeefbb7824e8008e6b5480d2d5c8f80545" exitCode=0 Jan 10 07:03:18 crc 
kubenswrapper[4810]: I0110 07:03:18.674786 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4b549" event={"ID":"dca32237-a0ff-45fe-b207-7a4ab529a88d","Type":"ContainerDied","Data":"2483a2202462b5c46eb72675a598cbeeefbb7824e8008e6b5480d2d5c8f80545"} Jan 10 07:03:18 crc kubenswrapper[4810]: I0110 07:03:18.678108 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcjnm" event={"ID":"3445e5da-a88c-4868-92da-6677bc21e851","Type":"ContainerStarted","Data":"ab5cd12e6cf183d05768fb6e0b880b1b9cebe1de8fead7be345c168729b449b8"} Jan 10 07:03:18 crc kubenswrapper[4810]: I0110 07:03:18.726887 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fcjnm" podStartSLOduration=2.242332125 podStartE2EDuration="4.726861557s" podCreationTimestamp="2026-01-10 07:03:14 +0000 UTC" firstStartedPulling="2026-01-10 07:03:15.628013748 +0000 UTC m=+1024.243506631" lastFinishedPulling="2026-01-10 07:03:18.11254312 +0000 UTC m=+1026.728036063" observedRunningTime="2026-01-10 07:03:18.719664235 +0000 UTC m=+1027.335157158" watchObservedRunningTime="2026-01-10 07:03:18.726861557 +0000 UTC m=+1027.342354480" Jan 10 07:03:19 crc kubenswrapper[4810]: I0110 07:03:19.688551 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4b549" event={"ID":"dca32237-a0ff-45fe-b207-7a4ab529a88d","Type":"ContainerStarted","Data":"4a69709a6fa9159b9e29dda3c16093a008123a7b064d76f4c56a55ad84fbbf6e"} Jan 10 07:03:19 crc kubenswrapper[4810]: I0110 07:03:19.704505 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4b549" podStartSLOduration=4.197396795 podStartE2EDuration="6.704464487s" podCreationTimestamp="2026-01-10 07:03:13 +0000 UTC" firstStartedPulling="2026-01-10 07:03:16.648536006 +0000 UTC m=+1025.264028899" lastFinishedPulling="2026-01-10 
07:03:19.155603698 +0000 UTC m=+1027.771096591" observedRunningTime="2026-01-10 07:03:19.702942721 +0000 UTC m=+1028.318435614" watchObservedRunningTime="2026-01-10 07:03:19.704464487 +0000 UTC m=+1028.319957400" Jan 10 07:03:20 crc kubenswrapper[4810]: I0110 07:03:20.652585 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:03:20 crc kubenswrapper[4810]: I0110 07:03:20.882942 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 10 07:03:20 crc kubenswrapper[4810]: I0110 07:03:20.883006 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 10 07:03:23 crc kubenswrapper[4810]: I0110 07:03:23.614813 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:23 crc kubenswrapper[4810]: I0110 07:03:23.615153 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:23 crc kubenswrapper[4810]: I0110 07:03:23.676124 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.676279 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4"] Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.677619 4810 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.679295 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-db-secret" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.681887 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-create-w9npk"] Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.682684 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-w9npk" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.701831 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4"] Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.706677 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-w9npk"] Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.761597 4810 generic.go:334] "Generic (PLEG): container finished" podID="13c176cb-a9d8-49ed-9d35-78a2975d9dd6" containerID="9de41f5b3f8c47cf3fb986f5f82afe14c4c2d7aeb129dbeef40727c175a8f327" exitCode=0 Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.761835 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"13c176cb-a9d8-49ed-9d35-78a2975d9dd6","Type":"ContainerDied","Data":"9de41f5b3f8c47cf3fb986f5f82afe14c4c2d7aeb129dbeef40727c175a8f327"} Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.857560 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vprnx\" (UniqueName: \"kubernetes.io/projected/906dd112-4b63-4f4a-af23-98e0b48be7f3-kube-api-access-vprnx\") pod \"keystone-db-create-w9npk\" (UID: \"906dd112-4b63-4f4a-af23-98e0b48be7f3\") " 
pod="swift-kuttl-tests/keystone-db-create-w9npk" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.857895 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/906dd112-4b63-4f4a-af23-98e0b48be7f3-operator-scripts\") pod \"keystone-db-create-w9npk\" (UID: \"906dd112-4b63-4f4a-af23-98e0b48be7f3\") " pod="swift-kuttl-tests/keystone-db-create-w9npk" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.858051 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7bq9\" (UniqueName: \"kubernetes.io/projected/f4f038cf-6ade-41e7-a85d-0c73babd42a2-kube-api-access-p7bq9\") pod \"keystone-c4b1-account-create-update-4b6n4\" (UID: \"f4f038cf-6ade-41e7-a85d-0c73babd42a2\") " pod="swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.858251 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f038cf-6ade-41e7-a85d-0c73babd42a2-operator-scripts\") pod \"keystone-c4b1-account-create-update-4b6n4\" (UID: \"f4f038cf-6ade-41e7-a85d-0c73babd42a2\") " pod="swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.959769 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vprnx\" (UniqueName: \"kubernetes.io/projected/906dd112-4b63-4f4a-af23-98e0b48be7f3-kube-api-access-vprnx\") pod \"keystone-db-create-w9npk\" (UID: \"906dd112-4b63-4f4a-af23-98e0b48be7f3\") " pod="swift-kuttl-tests/keystone-db-create-w9npk" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.960164 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/906dd112-4b63-4f4a-af23-98e0b48be7f3-operator-scripts\") pod \"keystone-db-create-w9npk\" (UID: \"906dd112-4b63-4f4a-af23-98e0b48be7f3\") " pod="swift-kuttl-tests/keystone-db-create-w9npk" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.960243 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7bq9\" (UniqueName: \"kubernetes.io/projected/f4f038cf-6ade-41e7-a85d-0c73babd42a2-kube-api-access-p7bq9\") pod \"keystone-c4b1-account-create-update-4b6n4\" (UID: \"f4f038cf-6ade-41e7-a85d-0c73babd42a2\") " pod="swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.960380 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f038cf-6ade-41e7-a85d-0c73babd42a2-operator-scripts\") pod \"keystone-c4b1-account-create-update-4b6n4\" (UID: \"f4f038cf-6ade-41e7-a85d-0c73babd42a2\") " pod="swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.960918 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/906dd112-4b63-4f4a-af23-98e0b48be7f3-operator-scripts\") pod \"keystone-db-create-w9npk\" (UID: \"906dd112-4b63-4f4a-af23-98e0b48be7f3\") " pod="swift-kuttl-tests/keystone-db-create-w9npk" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.961805 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f038cf-6ade-41e7-a85d-0c73babd42a2-operator-scripts\") pod \"keystone-c4b1-account-create-update-4b6n4\" (UID: \"f4f038cf-6ade-41e7-a85d-0c73babd42a2\") " pod="swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.975692 4810 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vprnx\" (UniqueName: \"kubernetes.io/projected/906dd112-4b63-4f4a-af23-98e0b48be7f3-kube-api-access-vprnx\") pod \"keystone-db-create-w9npk\" (UID: \"906dd112-4b63-4f4a-af23-98e0b48be7f3\") " pod="swift-kuttl-tests/keystone-db-create-w9npk" Jan 10 07:03:24 crc kubenswrapper[4810]: I0110 07:03:24.979037 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7bq9\" (UniqueName: \"kubernetes.io/projected/f4f038cf-6ade-41e7-a85d-0c73babd42a2-kube-api-access-p7bq9\") pod \"keystone-c4b1-account-create-update-4b6n4\" (UID: \"f4f038cf-6ade-41e7-a85d-0c73babd42a2\") " pod="swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4" Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.013678 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.013737 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.024910 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4" Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.056616 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-w9npk" Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.065826 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.401949 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4"] Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.642235 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-create-w9npk"] Jan 10 07:03:25 crc kubenswrapper[4810]: W0110 07:03:25.645659 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906dd112_4b63_4f4a_af23_98e0b48be7f3.slice/crio-e73a2f710ea548f5f5ce67eee705a19a0e31eeaba33acd89e00baa44dc430a52 WatchSource:0}: Error finding container e73a2f710ea548f5f5ce67eee705a19a0e31eeaba33acd89e00baa44dc430a52: Status 404 returned error can't find the container with id e73a2f710ea548f5f5ce67eee705a19a0e31eeaba33acd89e00baa44dc430a52 Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.771497 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-w9npk" event={"ID":"906dd112-4b63-4f4a-af23-98e0b48be7f3","Type":"ContainerStarted","Data":"e73a2f710ea548f5f5ce67eee705a19a0e31eeaba33acd89e00baa44dc430a52"} Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.773508 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"13c176cb-a9d8-49ed-9d35-78a2975d9dd6","Type":"ContainerStarted","Data":"accf5782f118e6a962396bd37a632b177012a5e9914f7f905f8766a80feb6dd3"} Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.774727 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:03:25 crc 
kubenswrapper[4810]: I0110 07:03:25.779455 4810 generic.go:334] "Generic (PLEG): container finished" podID="f4f038cf-6ade-41e7-a85d-0c73babd42a2" containerID="16ca45ab3bd2c20a8365b9833e6eb04342f8da31b5113f6b383d6694ce0fb21e" exitCode=0 Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.779875 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4" event={"ID":"f4f038cf-6ade-41e7-a85d-0c73babd42a2","Type":"ContainerDied","Data":"16ca45ab3bd2c20a8365b9833e6eb04342f8da31b5113f6b383d6694ce0fb21e"} Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.779917 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4" event={"ID":"f4f038cf-6ade-41e7-a85d-0c73babd42a2","Type":"ContainerStarted","Data":"a5e2814e34ec7bfe84d96bca3c8c5690b902058fdfd95669094793eb9c9923e9"} Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.800536 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/rabbitmq-server-0" podStartSLOduration=38.491778528 podStartE2EDuration="1m12.800498539s" podCreationTimestamp="2026-01-10 07:02:13 +0000 UTC" firstStartedPulling="2026-01-10 07:02:15.534952227 +0000 UTC m=+964.150445130" lastFinishedPulling="2026-01-10 07:02:49.843672218 +0000 UTC m=+998.459165141" observedRunningTime="2026-01-10 07:03:25.798761287 +0000 UTC m=+1034.414254170" watchObservedRunningTime="2026-01-10 07:03:25.800498539 +0000 UTC m=+1034.415991422" Jan 10 07:03:25 crc kubenswrapper[4810]: I0110 07:03:25.851438 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:26 crc kubenswrapper[4810]: I0110 07:03:26.789580 4810 generic.go:334] "Generic (PLEG): container finished" podID="906dd112-4b63-4f4a-af23-98e0b48be7f3" containerID="05975c5ea8093b17636b4b2fce9a31166cc2105b0b1b651e21dcd44845584d1e" exitCode=0 Jan 10 07:03:26 crc 
kubenswrapper[4810]: I0110 07:03:26.789657 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-w9npk" event={"ID":"906dd112-4b63-4f4a-af23-98e0b48be7f3","Type":"ContainerDied","Data":"05975c5ea8093b17636b4b2fce9a31166cc2105b0b1b651e21dcd44845584d1e"} Jan 10 07:03:27 crc kubenswrapper[4810]: I0110 07:03:27.212913 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4" Jan 10 07:03:27 crc kubenswrapper[4810]: I0110 07:03:27.320415 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7bq9\" (UniqueName: \"kubernetes.io/projected/f4f038cf-6ade-41e7-a85d-0c73babd42a2-kube-api-access-p7bq9\") pod \"f4f038cf-6ade-41e7-a85d-0c73babd42a2\" (UID: \"f4f038cf-6ade-41e7-a85d-0c73babd42a2\") " Jan 10 07:03:27 crc kubenswrapper[4810]: I0110 07:03:27.321039 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f038cf-6ade-41e7-a85d-0c73babd42a2-operator-scripts\") pod \"f4f038cf-6ade-41e7-a85d-0c73babd42a2\" (UID: \"f4f038cf-6ade-41e7-a85d-0c73babd42a2\") " Jan 10 07:03:27 crc kubenswrapper[4810]: I0110 07:03:27.321656 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4f038cf-6ade-41e7-a85d-0c73babd42a2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4f038cf-6ade-41e7-a85d-0c73babd42a2" (UID: "f4f038cf-6ade-41e7-a85d-0c73babd42a2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:03:27 crc kubenswrapper[4810]: I0110 07:03:27.321851 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4f038cf-6ade-41e7-a85d-0c73babd42a2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:27 crc kubenswrapper[4810]: I0110 07:03:27.330615 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f038cf-6ade-41e7-a85d-0c73babd42a2-kube-api-access-p7bq9" (OuterVolumeSpecName: "kube-api-access-p7bq9") pod "f4f038cf-6ade-41e7-a85d-0c73babd42a2" (UID: "f4f038cf-6ade-41e7-a85d-0c73babd42a2"). InnerVolumeSpecName "kube-api-access-p7bq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:03:27 crc kubenswrapper[4810]: I0110 07:03:27.423773 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7bq9\" (UniqueName: \"kubernetes.io/projected/f4f038cf-6ade-41e7-a85d-0c73babd42a2-kube-api-access-p7bq9\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:27 crc kubenswrapper[4810]: I0110 07:03:27.477910 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fcjnm"] Jan 10 07:03:27 crc kubenswrapper[4810]: I0110 07:03:27.800395 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4" event={"ID":"f4f038cf-6ade-41e7-a85d-0c73babd42a2","Type":"ContainerDied","Data":"a5e2814e34ec7bfe84d96bca3c8c5690b902058fdfd95669094793eb9c9923e9"} Jan 10 07:03:27 crc kubenswrapper[4810]: I0110 07:03:27.800460 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5e2814e34ec7bfe84d96bca3c8c5690b902058fdfd95669094793eb9c9923e9" Jan 10 07:03:27 crc kubenswrapper[4810]: I0110 07:03:27.800624 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4" Jan 10 07:03:27 crc kubenswrapper[4810]: I0110 07:03:27.800612 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fcjnm" podUID="3445e5da-a88c-4868-92da-6677bc21e851" containerName="registry-server" containerID="cri-o://ab5cd12e6cf183d05768fb6e0b880b1b9cebe1de8fead7be345c168729b449b8" gracePeriod=2 Jan 10 07:03:28 crc kubenswrapper[4810]: I0110 07:03:28.090364 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-w9npk" Jan 10 07:03:28 crc kubenswrapper[4810]: I0110 07:03:28.234037 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/906dd112-4b63-4f4a-af23-98e0b48be7f3-operator-scripts\") pod \"906dd112-4b63-4f4a-af23-98e0b48be7f3\" (UID: \"906dd112-4b63-4f4a-af23-98e0b48be7f3\") " Jan 10 07:03:28 crc kubenswrapper[4810]: I0110 07:03:28.234256 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vprnx\" (UniqueName: \"kubernetes.io/projected/906dd112-4b63-4f4a-af23-98e0b48be7f3-kube-api-access-vprnx\") pod \"906dd112-4b63-4f4a-af23-98e0b48be7f3\" (UID: \"906dd112-4b63-4f4a-af23-98e0b48be7f3\") " Jan 10 07:03:28 crc kubenswrapper[4810]: I0110 07:03:28.234583 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/906dd112-4b63-4f4a-af23-98e0b48be7f3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "906dd112-4b63-4f4a-af23-98e0b48be7f3" (UID: "906dd112-4b63-4f4a-af23-98e0b48be7f3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:03:28 crc kubenswrapper[4810]: I0110 07:03:28.238588 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906dd112-4b63-4f4a-af23-98e0b48be7f3-kube-api-access-vprnx" (OuterVolumeSpecName: "kube-api-access-vprnx") pod "906dd112-4b63-4f4a-af23-98e0b48be7f3" (UID: "906dd112-4b63-4f4a-af23-98e0b48be7f3"). InnerVolumeSpecName "kube-api-access-vprnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:03:28 crc kubenswrapper[4810]: I0110 07:03:28.336047 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vprnx\" (UniqueName: \"kubernetes.io/projected/906dd112-4b63-4f4a-af23-98e0b48be7f3-kube-api-access-vprnx\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:28 crc kubenswrapper[4810]: I0110 07:03:28.336341 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/906dd112-4b63-4f4a-af23-98e0b48be7f3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:28 crc kubenswrapper[4810]: I0110 07:03:28.809889 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-create-w9npk" event={"ID":"906dd112-4b63-4f4a-af23-98e0b48be7f3","Type":"ContainerDied","Data":"e73a2f710ea548f5f5ce67eee705a19a0e31eeaba33acd89e00baa44dc430a52"} Jan 10 07:03:28 crc kubenswrapper[4810]: I0110 07:03:28.809928 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e73a2f710ea548f5f5ce67eee705a19a0e31eeaba33acd89e00baa44dc430a52" Jan 10 07:03:28 crc kubenswrapper[4810]: I0110 07:03:28.809996 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-create-w9npk" Jan 10 07:03:29 crc kubenswrapper[4810]: I0110 07:03:29.291059 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-index-rp747"] Jan 10 07:03:29 crc kubenswrapper[4810]: E0110 07:03:29.291464 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906dd112-4b63-4f4a-af23-98e0b48be7f3" containerName="mariadb-database-create" Jan 10 07:03:29 crc kubenswrapper[4810]: I0110 07:03:29.291484 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="906dd112-4b63-4f4a-af23-98e0b48be7f3" containerName="mariadb-database-create" Jan 10 07:03:29 crc kubenswrapper[4810]: E0110 07:03:29.291501 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f038cf-6ade-41e7-a85d-0c73babd42a2" containerName="mariadb-account-create-update" Jan 10 07:03:29 crc kubenswrapper[4810]: I0110 07:03:29.291513 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f038cf-6ade-41e7-a85d-0c73babd42a2" containerName="mariadb-account-create-update" Jan 10 07:03:29 crc kubenswrapper[4810]: I0110 07:03:29.291888 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f038cf-6ade-41e7-a85d-0c73babd42a2" containerName="mariadb-account-create-update" Jan 10 07:03:29 crc kubenswrapper[4810]: I0110 07:03:29.291946 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="906dd112-4b63-4f4a-af23-98e0b48be7f3" containerName="mariadb-database-create" Jan 10 07:03:29 crc kubenswrapper[4810]: I0110 07:03:29.292766 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-rp747" Jan 10 07:03:29 crc kubenswrapper[4810]: I0110 07:03:29.295906 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-index-dockercfg-jbcvd" Jan 10 07:03:29 crc kubenswrapper[4810]: I0110 07:03:29.308538 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-rp747"] Jan 10 07:03:29 crc kubenswrapper[4810]: I0110 07:03:29.451063 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99mnh\" (UniqueName: \"kubernetes.io/projected/1ab9e5c2-daee-4d73-b55e-df866fba1922-kube-api-access-99mnh\") pod \"barbican-operator-index-rp747\" (UID: \"1ab9e5c2-daee-4d73-b55e-df866fba1922\") " pod="openstack-operators/barbican-operator-index-rp747" Jan 10 07:03:29 crc kubenswrapper[4810]: I0110 07:03:29.553046 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99mnh\" (UniqueName: \"kubernetes.io/projected/1ab9e5c2-daee-4d73-b55e-df866fba1922-kube-api-access-99mnh\") pod \"barbican-operator-index-rp747\" (UID: \"1ab9e5c2-daee-4d73-b55e-df866fba1922\") " pod="openstack-operators/barbican-operator-index-rp747" Jan 10 07:03:29 crc kubenswrapper[4810]: I0110 07:03:29.831706 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99mnh\" (UniqueName: \"kubernetes.io/projected/1ab9e5c2-daee-4d73-b55e-df866fba1922-kube-api-access-99mnh\") pod \"barbican-operator-index-rp747\" (UID: \"1ab9e5c2-daee-4d73-b55e-df866fba1922\") " pod="openstack-operators/barbican-operator-index-rp747" Jan 10 07:03:29 crc kubenswrapper[4810]: I0110 07:03:29.918089 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-index-rp747" Jan 10 07:03:30 crc kubenswrapper[4810]: I0110 07:03:30.394798 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-index-rp747"] Jan 10 07:03:30 crc kubenswrapper[4810]: I0110 07:03:30.830546 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-rp747" event={"ID":"1ab9e5c2-daee-4d73-b55e-df866fba1922","Type":"ContainerStarted","Data":"b727f1d5f91d4c15691c3f321d2700c44961d8bd49e5b04a4502b417776cb168"} Jan 10 07:03:30 crc kubenswrapper[4810]: I0110 07:03:30.833987 4810 generic.go:334] "Generic (PLEG): container finished" podID="3445e5da-a88c-4868-92da-6677bc21e851" containerID="ab5cd12e6cf183d05768fb6e0b880b1b9cebe1de8fead7be345c168729b449b8" exitCode=0 Jan 10 07:03:30 crc kubenswrapper[4810]: I0110 07:03:30.834039 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcjnm" event={"ID":"3445e5da-a88c-4868-92da-6677bc21e851","Type":"ContainerDied","Data":"ab5cd12e6cf183d05768fb6e0b880b1b9cebe1de8fead7be345c168729b449b8"} Jan 10 07:03:31 crc kubenswrapper[4810]: I0110 07:03:31.766931 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:31 crc kubenswrapper[4810]: I0110 07:03:31.843863 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fcjnm" event={"ID":"3445e5da-a88c-4868-92da-6677bc21e851","Type":"ContainerDied","Data":"9b90512983e73cf559950f67c2d51e068b08a966f235f5520f280fd439bee951"} Jan 10 07:03:31 crc kubenswrapper[4810]: I0110 07:03:31.843929 4810 scope.go:117] "RemoveContainer" containerID="ab5cd12e6cf183d05768fb6e0b880b1b9cebe1de8fead7be345c168729b449b8" Jan 10 07:03:31 crc kubenswrapper[4810]: I0110 07:03:31.844040 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fcjnm" Jan 10 07:03:31 crc kubenswrapper[4810]: I0110 07:03:31.866096 4810 scope.go:117] "RemoveContainer" containerID="cff8ccd27ed5ba0a52d9159f53554b3138fd37b60821efe8ddf20f48fdfdd412" Jan 10 07:03:31 crc kubenswrapper[4810]: I0110 07:03:31.889391 4810 scope.go:117] "RemoveContainer" containerID="d73748f9c7ac2111115ad64fcff68f5c861a8f896e38902a3a9c9bfd68439700" Jan 10 07:03:31 crc kubenswrapper[4810]: I0110 07:03:31.890179 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3445e5da-a88c-4868-92da-6677bc21e851-catalog-content\") pod \"3445e5da-a88c-4868-92da-6677bc21e851\" (UID: \"3445e5da-a88c-4868-92da-6677bc21e851\") " Jan 10 07:03:31 crc kubenswrapper[4810]: I0110 07:03:31.890671 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6rhp\" (UniqueName: \"kubernetes.io/projected/3445e5da-a88c-4868-92da-6677bc21e851-kube-api-access-k6rhp\") pod \"3445e5da-a88c-4868-92da-6677bc21e851\" (UID: \"3445e5da-a88c-4868-92da-6677bc21e851\") " Jan 10 07:03:31 crc kubenswrapper[4810]: I0110 07:03:31.890832 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3445e5da-a88c-4868-92da-6677bc21e851-utilities\") pod \"3445e5da-a88c-4868-92da-6677bc21e851\" (UID: \"3445e5da-a88c-4868-92da-6677bc21e851\") " Jan 10 07:03:31 crc kubenswrapper[4810]: I0110 07:03:31.892317 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3445e5da-a88c-4868-92da-6677bc21e851-utilities" (OuterVolumeSpecName: "utilities") pod "3445e5da-a88c-4868-92da-6677bc21e851" (UID: "3445e5da-a88c-4868-92da-6677bc21e851"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:03:31 crc kubenswrapper[4810]: I0110 07:03:31.902494 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3445e5da-a88c-4868-92da-6677bc21e851-kube-api-access-k6rhp" (OuterVolumeSpecName: "kube-api-access-k6rhp") pod "3445e5da-a88c-4868-92da-6677bc21e851" (UID: "3445e5da-a88c-4868-92da-6677bc21e851"). InnerVolumeSpecName "kube-api-access-k6rhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:03:31 crc kubenswrapper[4810]: I0110 07:03:31.993325 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6rhp\" (UniqueName: \"kubernetes.io/projected/3445e5da-a88c-4868-92da-6677bc21e851-kube-api-access-k6rhp\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:31 crc kubenswrapper[4810]: I0110 07:03:31.993358 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3445e5da-a88c-4868-92da-6677bc21e851-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:32 crc kubenswrapper[4810]: I0110 07:03:32.019049 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3445e5da-a88c-4868-92da-6677bc21e851-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3445e5da-a88c-4868-92da-6677bc21e851" (UID: "3445e5da-a88c-4868-92da-6677bc21e851"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:03:32 crc kubenswrapper[4810]: I0110 07:03:32.094331 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3445e5da-a88c-4868-92da-6677bc21e851-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:32 crc kubenswrapper[4810]: I0110 07:03:32.194116 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fcjnm"] Jan 10 07:03:32 crc kubenswrapper[4810]: I0110 07:03:32.202087 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fcjnm"] Jan 10 07:03:32 crc kubenswrapper[4810]: I0110 07:03:32.851379 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-rp747" event={"ID":"1ab9e5c2-daee-4d73-b55e-df866fba1922","Type":"ContainerStarted","Data":"45c058465eed30c7a33bbe63121d74f861ac4f2c1c836891cc699ca9f54f1938"} Jan 10 07:03:33 crc kubenswrapper[4810]: I0110 07:03:33.662423 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:33 crc kubenswrapper[4810]: I0110 07:03:33.699559 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-index-rp747" podStartSLOduration=2.497441528 podStartE2EDuration="4.699540304s" podCreationTimestamp="2026-01-10 07:03:29 +0000 UTC" firstStartedPulling="2026-01-10 07:03:30.404573557 +0000 UTC m=+1039.020066480" lastFinishedPulling="2026-01-10 07:03:32.606672373 +0000 UTC m=+1041.222165256" observedRunningTime="2026-01-10 07:03:32.872705276 +0000 UTC m=+1041.488198159" watchObservedRunningTime="2026-01-10 07:03:33.699540304 +0000 UTC m=+1042.315033197" Jan 10 07:03:33 crc kubenswrapper[4810]: I0110 07:03:33.703054 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3445e5da-a88c-4868-92da-6677bc21e851" 
path="/var/lib/kubelet/pods/3445e5da-a88c-4868-92da-6677bc21e851/volumes" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.029803 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.610155 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-db-sync-894vg"] Jan 10 07:03:35 crc kubenswrapper[4810]: E0110 07:03:35.610423 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3445e5da-a88c-4868-92da-6677bc21e851" containerName="extract-utilities" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.610435 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3445e5da-a88c-4868-92da-6677bc21e851" containerName="extract-utilities" Jan 10 07:03:35 crc kubenswrapper[4810]: E0110 07:03:35.610446 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3445e5da-a88c-4868-92da-6677bc21e851" containerName="extract-content" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.610454 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3445e5da-a88c-4868-92da-6677bc21e851" containerName="extract-content" Jan 10 07:03:35 crc kubenswrapper[4810]: E0110 07:03:35.610461 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3445e5da-a88c-4868-92da-6677bc21e851" containerName="registry-server" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.610469 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3445e5da-a88c-4868-92da-6677bc21e851" containerName="registry-server" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.610580 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3445e5da-a88c-4868-92da-6677bc21e851" containerName="registry-server" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.610974 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-894vg" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.613780 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.613914 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.614120 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.618425 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-vlnjr" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.620298 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-894vg"] Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.743246 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f325c4-37ce-4a2a-910e-817636cc1dc1-config-data\") pod \"keystone-db-sync-894vg\" (UID: \"09f325c4-37ce-4a2a-910e-817636cc1dc1\") " pod="swift-kuttl-tests/keystone-db-sync-894vg" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.743337 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7bj8\" (UniqueName: \"kubernetes.io/projected/09f325c4-37ce-4a2a-910e-817636cc1dc1-kube-api-access-r7bj8\") pod \"keystone-db-sync-894vg\" (UID: \"09f325c4-37ce-4a2a-910e-817636cc1dc1\") " pod="swift-kuttl-tests/keystone-db-sync-894vg" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.844492 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f325c4-37ce-4a2a-910e-817636cc1dc1-config-data\") pod 
\"keystone-db-sync-894vg\" (UID: \"09f325c4-37ce-4a2a-910e-817636cc1dc1\") " pod="swift-kuttl-tests/keystone-db-sync-894vg" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.844568 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7bj8\" (UniqueName: \"kubernetes.io/projected/09f325c4-37ce-4a2a-910e-817636cc1dc1-kube-api-access-r7bj8\") pod \"keystone-db-sync-894vg\" (UID: \"09f325c4-37ce-4a2a-910e-817636cc1dc1\") " pod="swift-kuttl-tests/keystone-db-sync-894vg" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.854074 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f325c4-37ce-4a2a-910e-817636cc1dc1-config-data\") pod \"keystone-db-sync-894vg\" (UID: \"09f325c4-37ce-4a2a-910e-817636cc1dc1\") " pod="swift-kuttl-tests/keystone-db-sync-894vg" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.861110 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7bj8\" (UniqueName: \"kubernetes.io/projected/09f325c4-37ce-4a2a-910e-817636cc1dc1-kube-api-access-r7bj8\") pod \"keystone-db-sync-894vg\" (UID: \"09f325c4-37ce-4a2a-910e-817636cc1dc1\") " pod="swift-kuttl-tests/keystone-db-sync-894vg" Jan 10 07:03:35 crc kubenswrapper[4810]: I0110 07:03:35.925728 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-894vg" Jan 10 07:03:36 crc kubenswrapper[4810]: I0110 07:03:36.426125 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-894vg"] Jan 10 07:03:36 crc kubenswrapper[4810]: I0110 07:03:36.894462 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-894vg" event={"ID":"09f325c4-37ce-4a2a-910e-817636cc1dc1","Type":"ContainerStarted","Data":"8510bbe2d64443d6c2fbe8e5ff56a2fd0e6a0d53c6c5c7476dc68f69babab8cc"} Jan 10 07:03:38 crc kubenswrapper[4810]: I0110 07:03:38.874756 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4b549"] Jan 10 07:03:38 crc kubenswrapper[4810]: I0110 07:03:38.875198 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4b549" podUID="dca32237-a0ff-45fe-b207-7a4ab529a88d" containerName="registry-server" containerID="cri-o://4a69709a6fa9159b9e29dda3c16093a008123a7b064d76f4c56a55ad84fbbf6e" gracePeriod=2 Jan 10 07:03:39 crc kubenswrapper[4810]: I0110 07:03:39.918786 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-index-rp747" Jan 10 07:03:39 crc kubenswrapper[4810]: I0110 07:03:39.918844 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/barbican-operator-index-rp747" Jan 10 07:03:39 crc kubenswrapper[4810]: I0110 07:03:39.931413 4810 generic.go:334] "Generic (PLEG): container finished" podID="dca32237-a0ff-45fe-b207-7a4ab529a88d" containerID="4a69709a6fa9159b9e29dda3c16093a008123a7b064d76f4c56a55ad84fbbf6e" exitCode=0 Jan 10 07:03:39 crc kubenswrapper[4810]: I0110 07:03:39.931471 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4b549" 
event={"ID":"dca32237-a0ff-45fe-b207-7a4ab529a88d","Type":"ContainerDied","Data":"4a69709a6fa9159b9e29dda3c16093a008123a7b064d76f4c56a55ad84fbbf6e"} Jan 10 07:03:39 crc kubenswrapper[4810]: I0110 07:03:39.949613 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/barbican-operator-index-rp747" Jan 10 07:03:39 crc kubenswrapper[4810]: I0110 07:03:39.988183 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-index-rp747" Jan 10 07:03:43 crc kubenswrapper[4810]: E0110 07:03:43.615088 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4a69709a6fa9159b9e29dda3c16093a008123a7b064d76f4c56a55ad84fbbf6e is running failed: container process not found" containerID="4a69709a6fa9159b9e29dda3c16093a008123a7b064d76f4c56a55ad84fbbf6e" cmd=["grpc_health_probe","-addr=:50051"] Jan 10 07:03:43 crc kubenswrapper[4810]: E0110 07:03:43.616775 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4a69709a6fa9159b9e29dda3c16093a008123a7b064d76f4c56a55ad84fbbf6e is running failed: container process not found" containerID="4a69709a6fa9159b9e29dda3c16093a008123a7b064d76f4c56a55ad84fbbf6e" cmd=["grpc_health_probe","-addr=:50051"] Jan 10 07:03:43 crc kubenswrapper[4810]: E0110 07:03:43.617176 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4a69709a6fa9159b9e29dda3c16093a008123a7b064d76f4c56a55ad84fbbf6e is running failed: container process not found" containerID="4a69709a6fa9159b9e29dda3c16093a008123a7b064d76f4c56a55ad84fbbf6e" cmd=["grpc_health_probe","-addr=:50051"] Jan 10 07:03:43 crc kubenswrapper[4810]: E0110 07:03:43.617282 4810 prober.go:104] "Probe errored" err="rpc error: 
code = NotFound desc = container is not created or running: checking if PID of 4a69709a6fa9159b9e29dda3c16093a008123a7b064d76f4c56a55ad84fbbf6e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-4b549" podUID="dca32237-a0ff-45fe-b207-7a4ab529a88d" containerName="registry-server" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.012173 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.110188 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s592\" (UniqueName: \"kubernetes.io/projected/dca32237-a0ff-45fe-b207-7a4ab529a88d-kube-api-access-5s592\") pod \"dca32237-a0ff-45fe-b207-7a4ab529a88d\" (UID: \"dca32237-a0ff-45fe-b207-7a4ab529a88d\") " Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.110367 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dca32237-a0ff-45fe-b207-7a4ab529a88d-catalog-content\") pod \"dca32237-a0ff-45fe-b207-7a4ab529a88d\" (UID: \"dca32237-a0ff-45fe-b207-7a4ab529a88d\") " Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.110422 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dca32237-a0ff-45fe-b207-7a4ab529a88d-utilities\") pod \"dca32237-a0ff-45fe-b207-7a4ab529a88d\" (UID: \"dca32237-a0ff-45fe-b207-7a4ab529a88d\") " Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.111427 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dca32237-a0ff-45fe-b207-7a4ab529a88d-utilities" (OuterVolumeSpecName: "utilities") pod "dca32237-a0ff-45fe-b207-7a4ab529a88d" (UID: "dca32237-a0ff-45fe-b207-7a4ab529a88d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.115659 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca32237-a0ff-45fe-b207-7a4ab529a88d-kube-api-access-5s592" (OuterVolumeSpecName: "kube-api-access-5s592") pod "dca32237-a0ff-45fe-b207-7a4ab529a88d" (UID: "dca32237-a0ff-45fe-b207-7a4ab529a88d"). InnerVolumeSpecName "kube-api-access-5s592". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.167489 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dca32237-a0ff-45fe-b207-7a4ab529a88d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dca32237-a0ff-45fe-b207-7a4ab529a88d" (UID: "dca32237-a0ff-45fe-b207-7a4ab529a88d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.212592 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dca32237-a0ff-45fe-b207-7a4ab529a88d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.212662 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dca32237-a0ff-45fe-b207-7a4ab529a88d-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.212682 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s592\" (UniqueName: \"kubernetes.io/projected/dca32237-a0ff-45fe-b207-7a4ab529a88d-kube-api-access-5s592\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.949534 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7"] Jan 10 07:03:44 crc 
kubenswrapper[4810]: E0110 07:03:44.950119 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca32237-a0ff-45fe-b207-7a4ab529a88d" containerName="extract-content" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.950136 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca32237-a0ff-45fe-b207-7a4ab529a88d" containerName="extract-content" Jan 10 07:03:44 crc kubenswrapper[4810]: E0110 07:03:44.950168 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca32237-a0ff-45fe-b207-7a4ab529a88d" containerName="registry-server" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.950177 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca32237-a0ff-45fe-b207-7a4ab529a88d" containerName="registry-server" Jan 10 07:03:44 crc kubenswrapper[4810]: E0110 07:03:44.950209 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca32237-a0ff-45fe-b207-7a4ab529a88d" containerName="extract-utilities" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.950220 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca32237-a0ff-45fe-b207-7a4ab529a88d" containerName="extract-utilities" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.950357 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca32237-a0ff-45fe-b207-7a4ab529a88d" containerName="registry-server" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.951472 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.953825 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-8bf9r" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.966579 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4b549" event={"ID":"dca32237-a0ff-45fe-b207-7a4ab529a88d","Type":"ContainerDied","Data":"f7928a12f9884909871a48c9244c6eb22e01e832337de8fd4a4971f913f1849b"} Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.966628 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4b549" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.966636 4810 scope.go:117] "RemoveContainer" containerID="4a69709a6fa9159b9e29dda3c16093a008123a7b064d76f4c56a55ad84fbbf6e" Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.967313 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7"] Jan 10 07:03:44 crc kubenswrapper[4810]: I0110 07:03:44.976844 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-894vg" event={"ID":"09f325c4-37ce-4a2a-910e-817636cc1dc1","Type":"ContainerStarted","Data":"adda8a31813683e6300962ca84d6875b471352c3d5f2a8e4e0869e65c7146faa"} Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.011726 4810 scope.go:117] "RemoveContainer" containerID="2483a2202462b5c46eb72675a598cbeeefbb7824e8008e6b5480d2d5c8f80545" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.016058 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-db-sync-894vg" podStartSLOduration=2.407574078 podStartE2EDuration="10.016035792s" podCreationTimestamp="2026-01-10 07:03:35 +0000 
UTC" firstStartedPulling="2026-01-10 07:03:36.439017274 +0000 UTC m=+1045.054510157" lastFinishedPulling="2026-01-10 07:03:44.047478978 +0000 UTC m=+1052.662971871" observedRunningTime="2026-01-10 07:03:45.012452656 +0000 UTC m=+1053.627945539" watchObservedRunningTime="2026-01-10 07:03:45.016035792 +0000 UTC m=+1053.631528685" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.037736 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4b549"] Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.040074 4810 scope.go:117] "RemoveContainer" containerID="b8eb44792446196bd611185330c9518c41df02895818ab4abb0bd27e02cd7bf1" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.045034 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4b549"] Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.124242 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e10035a-26d7-4c1e-932c-e5472e945541-bundle\") pod \"784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7\" (UID: \"8e10035a-26d7-4c1e-932c-e5472e945541\") " pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.124294 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e10035a-26d7-4c1e-932c-e5472e945541-util\") pod \"784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7\" (UID: \"8e10035a-26d7-4c1e-932c-e5472e945541\") " pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.124354 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr8fp\" (UniqueName: 
\"kubernetes.io/projected/8e10035a-26d7-4c1e-932c-e5472e945541-kube-api-access-wr8fp\") pod \"784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7\" (UID: \"8e10035a-26d7-4c1e-932c-e5472e945541\") " pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.225979 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e10035a-26d7-4c1e-932c-e5472e945541-bundle\") pod \"784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7\" (UID: \"8e10035a-26d7-4c1e-932c-e5472e945541\") " pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.226035 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e10035a-26d7-4c1e-932c-e5472e945541-util\") pod \"784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7\" (UID: \"8e10035a-26d7-4c1e-932c-e5472e945541\") " pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.226091 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr8fp\" (UniqueName: \"kubernetes.io/projected/8e10035a-26d7-4c1e-932c-e5472e945541-kube-api-access-wr8fp\") pod \"784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7\" (UID: \"8e10035a-26d7-4c1e-932c-e5472e945541\") " pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.226637 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e10035a-26d7-4c1e-932c-e5472e945541-util\") pod \"784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7\" (UID: 
\"8e10035a-26d7-4c1e-932c-e5472e945541\") " pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.227023 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e10035a-26d7-4c1e-932c-e5472e945541-bundle\") pod \"784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7\" (UID: \"8e10035a-26d7-4c1e-932c-e5472e945541\") " pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.251944 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr8fp\" (UniqueName: \"kubernetes.io/projected/8e10035a-26d7-4c1e-932c-e5472e945541-kube-api-access-wr8fp\") pod \"784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7\" (UID: \"8e10035a-26d7-4c1e-932c-e5472e945541\") " pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.299432 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.704779 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca32237-a0ff-45fe-b207-7a4ab529a88d" path="/var/lib/kubelet/pods/dca32237-a0ff-45fe-b207-7a4ab529a88d/volumes" Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.741830 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7"] Jan 10 07:03:45 crc kubenswrapper[4810]: W0110 07:03:45.748438 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e10035a_26d7_4c1e_932c_e5472e945541.slice/crio-de18c20c9753de9bded35b4a46de80d7f66f666f2c3d154934cbc3c771c5b707 WatchSource:0}: Error finding container de18c20c9753de9bded35b4a46de80d7f66f666f2c3d154934cbc3c771c5b707: Status 404 returned error can't find the container with id de18c20c9753de9bded35b4a46de80d7f66f666f2c3d154934cbc3c771c5b707 Jan 10 07:03:45 crc kubenswrapper[4810]: I0110 07:03:45.984881 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" event={"ID":"8e10035a-26d7-4c1e-932c-e5472e945541","Type":"ContainerStarted","Data":"de18c20c9753de9bded35b4a46de80d7f66f666f2c3d154934cbc3c771c5b707"} Jan 10 07:03:46 crc kubenswrapper[4810]: I0110 07:03:46.997866 4810 generic.go:334] "Generic (PLEG): container finished" podID="8e10035a-26d7-4c1e-932c-e5472e945541" containerID="e3b35dd01b0c205ec9ee0de2dae3c2f168f1e30934e2da5573a21a71af001fd8" exitCode=0 Jan 10 07:03:46 crc kubenswrapper[4810]: I0110 07:03:46.997942 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" 
event={"ID":"8e10035a-26d7-4c1e-932c-e5472e945541","Type":"ContainerDied","Data":"e3b35dd01b0c205ec9ee0de2dae3c2f168f1e30934e2da5573a21a71af001fd8"} Jan 10 07:03:48 crc kubenswrapper[4810]: I0110 07:03:48.007909 4810 generic.go:334] "Generic (PLEG): container finished" podID="8e10035a-26d7-4c1e-932c-e5472e945541" containerID="0372e377af72e00e0c75e870f1e06442614ed60cf294421362c2efa404038264" exitCode=0 Jan 10 07:03:48 crc kubenswrapper[4810]: I0110 07:03:48.008037 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" event={"ID":"8e10035a-26d7-4c1e-932c-e5472e945541","Type":"ContainerDied","Data":"0372e377af72e00e0c75e870f1e06442614ed60cf294421362c2efa404038264"} Jan 10 07:03:49 crc kubenswrapper[4810]: I0110 07:03:49.018407 4810 generic.go:334] "Generic (PLEG): container finished" podID="8e10035a-26d7-4c1e-932c-e5472e945541" containerID="90b750ac09094a02f95ec66a20e1a6bf1268b975cc2fd848c5fb28a7c017abc0" exitCode=0 Jan 10 07:03:49 crc kubenswrapper[4810]: I0110 07:03:49.018451 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" event={"ID":"8e10035a-26d7-4c1e-932c-e5472e945541","Type":"ContainerDied","Data":"90b750ac09094a02f95ec66a20e1a6bf1268b975cc2fd848c5fb28a7c017abc0"} Jan 10 07:03:49 crc kubenswrapper[4810]: I0110 07:03:49.020647 4810 generic.go:334] "Generic (PLEG): container finished" podID="09f325c4-37ce-4a2a-910e-817636cc1dc1" containerID="adda8a31813683e6300962ca84d6875b471352c3d5f2a8e4e0869e65c7146faa" exitCode=0 Jan 10 07:03:49 crc kubenswrapper[4810]: I0110 07:03:49.020697 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-894vg" event={"ID":"09f325c4-37ce-4a2a-910e-817636cc1dc1","Type":"ContainerDied","Data":"adda8a31813683e6300962ca84d6875b471352c3d5f2a8e4e0869e65c7146faa"} Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 
07:03:50.320157 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-894vg" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.369284 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.501759 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7bj8\" (UniqueName: \"kubernetes.io/projected/09f325c4-37ce-4a2a-910e-817636cc1dc1-kube-api-access-r7bj8\") pod \"09f325c4-37ce-4a2a-910e-817636cc1dc1\" (UID: \"09f325c4-37ce-4a2a-910e-817636cc1dc1\") " Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.501840 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e10035a-26d7-4c1e-932c-e5472e945541-bundle\") pod \"8e10035a-26d7-4c1e-932c-e5472e945541\" (UID: \"8e10035a-26d7-4c1e-932c-e5472e945541\") " Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.501913 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr8fp\" (UniqueName: \"kubernetes.io/projected/8e10035a-26d7-4c1e-932c-e5472e945541-kube-api-access-wr8fp\") pod \"8e10035a-26d7-4c1e-932c-e5472e945541\" (UID: \"8e10035a-26d7-4c1e-932c-e5472e945541\") " Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.501965 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f325c4-37ce-4a2a-910e-817636cc1dc1-config-data\") pod \"09f325c4-37ce-4a2a-910e-817636cc1dc1\" (UID: \"09f325c4-37ce-4a2a-910e-817636cc1dc1\") " Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.502010 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/8e10035a-26d7-4c1e-932c-e5472e945541-util\") pod \"8e10035a-26d7-4c1e-932c-e5472e945541\" (UID: \"8e10035a-26d7-4c1e-932c-e5472e945541\") " Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.503040 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e10035a-26d7-4c1e-932c-e5472e945541-bundle" (OuterVolumeSpecName: "bundle") pod "8e10035a-26d7-4c1e-932c-e5472e945541" (UID: "8e10035a-26d7-4c1e-932c-e5472e945541"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.506941 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e10035a-26d7-4c1e-932c-e5472e945541-kube-api-access-wr8fp" (OuterVolumeSpecName: "kube-api-access-wr8fp") pod "8e10035a-26d7-4c1e-932c-e5472e945541" (UID: "8e10035a-26d7-4c1e-932c-e5472e945541"). InnerVolumeSpecName "kube-api-access-wr8fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.507641 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f325c4-37ce-4a2a-910e-817636cc1dc1-kube-api-access-r7bj8" (OuterVolumeSpecName: "kube-api-access-r7bj8") pod "09f325c4-37ce-4a2a-910e-817636cc1dc1" (UID: "09f325c4-37ce-4a2a-910e-817636cc1dc1"). InnerVolumeSpecName "kube-api-access-r7bj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.515238 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e10035a-26d7-4c1e-932c-e5472e945541-util" (OuterVolumeSpecName: "util") pod "8e10035a-26d7-4c1e-932c-e5472e945541" (UID: "8e10035a-26d7-4c1e-932c-e5472e945541"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.535170 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09f325c4-37ce-4a2a-910e-817636cc1dc1-config-data" (OuterVolumeSpecName: "config-data") pod "09f325c4-37ce-4a2a-910e-817636cc1dc1" (UID: "09f325c4-37ce-4a2a-910e-817636cc1dc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.603942 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr8fp\" (UniqueName: \"kubernetes.io/projected/8e10035a-26d7-4c1e-932c-e5472e945541-kube-api-access-wr8fp\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.603970 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f325c4-37ce-4a2a-910e-817636cc1dc1-config-data\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.603980 4810 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e10035a-26d7-4c1e-932c-e5472e945541-util\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.603990 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7bj8\" (UniqueName: \"kubernetes.io/projected/09f325c4-37ce-4a2a-910e-817636cc1dc1-kube-api-access-r7bj8\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.603999 4810 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e10035a-26d7-4c1e-932c-e5472e945541-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.883022 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.883097 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.883155 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.884032 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20d4ffbd9b363df7f1755c8df4cc04082344a1639cdb27e352856dec031d294d"} pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 10 07:03:50 crc kubenswrapper[4810]: I0110 07:03:50.884132 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" containerID="cri-o://20d4ffbd9b363df7f1755c8df4cc04082344a1639cdb27e352856dec031d294d" gracePeriod=600 Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.041539 4810 generic.go:334] "Generic (PLEG): container finished" podID="b5b79429-9259-412f-bab8-27865ab7029b" containerID="20d4ffbd9b363df7f1755c8df4cc04082344a1639cdb27e352856dec031d294d" exitCode=0 Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.041608 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerDied","Data":"20d4ffbd9b363df7f1755c8df4cc04082344a1639cdb27e352856dec031d294d"} Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.041644 4810 scope.go:117] "RemoveContainer" containerID="15f41bc02b9f771423fbf604f7e443b9696eeab6c3ab3a52722c60140661e2b7" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.044770 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-db-sync-894vg" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.044780 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-db-sync-894vg" event={"ID":"09f325c4-37ce-4a2a-910e-817636cc1dc1","Type":"ContainerDied","Data":"8510bbe2d64443d6c2fbe8e5ff56a2fd0e6a0d53c6c5c7476dc68f69babab8cc"} Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.044815 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8510bbe2d64443d6c2fbe8e5ff56a2fd0e6a0d53c6c5c7476dc68f69babab8cc" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.048921 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" event={"ID":"8e10035a-26d7-4c1e-932c-e5472e945541","Type":"ContainerDied","Data":"de18c20c9753de9bded35b4a46de80d7f66f666f2c3d154934cbc3c771c5b707"} Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.048970 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de18c20c9753de9bded35b4a46de80d7f66f666f2c3d154934cbc3c771c5b707" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.049049 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.285305 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-mmrvb"] Jan 10 07:03:51 crc kubenswrapper[4810]: E0110 07:03:51.285917 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f325c4-37ce-4a2a-910e-817636cc1dc1" containerName="keystone-db-sync" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.285938 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f325c4-37ce-4a2a-910e-817636cc1dc1" containerName="keystone-db-sync" Jan 10 07:03:51 crc kubenswrapper[4810]: E0110 07:03:51.285949 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e10035a-26d7-4c1e-932c-e5472e945541" containerName="util" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.285958 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e10035a-26d7-4c1e-932c-e5472e945541" containerName="util" Jan 10 07:03:51 crc kubenswrapper[4810]: E0110 07:03:51.285999 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e10035a-26d7-4c1e-932c-e5472e945541" containerName="pull" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.286008 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e10035a-26d7-4c1e-932c-e5472e945541" containerName="pull" Jan 10 07:03:51 crc kubenswrapper[4810]: E0110 07:03:51.286020 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e10035a-26d7-4c1e-932c-e5472e945541" containerName="extract" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.286029 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e10035a-26d7-4c1e-932c-e5472e945541" containerName="extract" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.286160 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f325c4-37ce-4a2a-910e-817636cc1dc1" 
containerName="keystone-db-sync" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.286185 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e10035a-26d7-4c1e-932c-e5472e945541" containerName="extract" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.286758 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.289422 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.289586 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.289748 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"osp-secret" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.290687 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.292564 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-vlnjr" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.299923 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-mmrvb"] Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.315332 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-credential-keys\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.315371 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-config-data\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.315427 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-fernet-keys\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.315465 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v84j9\" (UniqueName: \"kubernetes.io/projected/87eafc83-9ef5-4a79-8f79-e6386ba86071-kube-api-access-v84j9\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.315482 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-scripts\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.416382 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v84j9\" (UniqueName: \"kubernetes.io/projected/87eafc83-9ef5-4a79-8f79-e6386ba86071-kube-api-access-v84j9\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.416422 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-scripts\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.416457 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-credential-keys\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.416514 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-config-data\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.417170 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-fernet-keys\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.420585 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-credential-keys\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.421260 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-config-data\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.422279 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-scripts\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.423471 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-fernet-keys\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.434935 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v84j9\" (UniqueName: \"kubernetes.io/projected/87eafc83-9ef5-4a79-8f79-e6386ba86071-kube-api-access-v84j9\") pod \"keystone-bootstrap-mmrvb\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:51 crc kubenswrapper[4810]: I0110 07:03:51.610829 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:52 crc kubenswrapper[4810]: I0110 07:03:52.059664 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerStarted","Data":"ef69a96aed65e9ebe56b671768834cb4b9361c0d3969bd6b62fbd2f9b5387011"} Jan 10 07:03:52 crc kubenswrapper[4810]: I0110 07:03:52.135049 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-mmrvb"] Jan 10 07:03:52 crc kubenswrapper[4810]: W0110 07:03:52.139826 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87eafc83_9ef5_4a79_8f79_e6386ba86071.slice/crio-30de562d2810560bbca982155da352e370fafba48a4e93a4e755bf1a63a19884 WatchSource:0}: Error finding container 30de562d2810560bbca982155da352e370fafba48a4e93a4e755bf1a63a19884: Status 404 returned error can't find the container with id 30de562d2810560bbca982155da352e370fafba48a4e93a4e755bf1a63a19884 Jan 10 07:03:53 crc kubenswrapper[4810]: I0110 07:03:53.072672 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" event={"ID":"87eafc83-9ef5-4a79-8f79-e6386ba86071","Type":"ContainerStarted","Data":"363175a0a8f2b31f8c3f22a0e7e5bf0dc9d319a75d067f36e806b5529389d136"} Jan 10 07:03:53 crc kubenswrapper[4810]: I0110 07:03:53.072943 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" event={"ID":"87eafc83-9ef5-4a79-8f79-e6386ba86071","Type":"ContainerStarted","Data":"30de562d2810560bbca982155da352e370fafba48a4e93a4e755bf1a63a19884"} Jan 10 07:03:53 crc kubenswrapper[4810]: I0110 07:03:53.094355 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" podStartSLOduration=2.094338342 
podStartE2EDuration="2.094338342s" podCreationTimestamp="2026-01-10 07:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:03:53.09091441 +0000 UTC m=+1061.706407333" watchObservedRunningTime="2026-01-10 07:03:53.094338342 +0000 UTC m=+1061.709831225" Jan 10 07:03:56 crc kubenswrapper[4810]: I0110 07:03:56.099244 4810 generic.go:334] "Generic (PLEG): container finished" podID="87eafc83-9ef5-4a79-8f79-e6386ba86071" containerID="363175a0a8f2b31f8c3f22a0e7e5bf0dc9d319a75d067f36e806b5529389d136" exitCode=0 Jan 10 07:03:56 crc kubenswrapper[4810]: I0110 07:03:56.099318 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" event={"ID":"87eafc83-9ef5-4a79-8f79-e6386ba86071","Type":"ContainerDied","Data":"363175a0a8f2b31f8c3f22a0e7e5bf0dc9d319a75d067f36e806b5529389d136"} Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.564680 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.610724 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-scripts\") pod \"87eafc83-9ef5-4a79-8f79-e6386ba86071\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.610792 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-config-data\") pod \"87eafc83-9ef5-4a79-8f79-e6386ba86071\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.611101 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-credential-keys\") pod \"87eafc83-9ef5-4a79-8f79-e6386ba86071\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.611156 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-fernet-keys\") pod \"87eafc83-9ef5-4a79-8f79-e6386ba86071\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.611263 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v84j9\" (UniqueName: \"kubernetes.io/projected/87eafc83-9ef5-4a79-8f79-e6386ba86071-kube-api-access-v84j9\") pod \"87eafc83-9ef5-4a79-8f79-e6386ba86071\" (UID: \"87eafc83-9ef5-4a79-8f79-e6386ba86071\") " Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.616706 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "87eafc83-9ef5-4a79-8f79-e6386ba86071" (UID: "87eafc83-9ef5-4a79-8f79-e6386ba86071"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.618155 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87eafc83-9ef5-4a79-8f79-e6386ba86071-kube-api-access-v84j9" (OuterVolumeSpecName: "kube-api-access-v84j9") pod "87eafc83-9ef5-4a79-8f79-e6386ba86071" (UID: "87eafc83-9ef5-4a79-8f79-e6386ba86071"). InnerVolumeSpecName "kube-api-access-v84j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.629336 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "87eafc83-9ef5-4a79-8f79-e6386ba86071" (UID: "87eafc83-9ef5-4a79-8f79-e6386ba86071"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.632345 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-scripts" (OuterVolumeSpecName: "scripts") pod "87eafc83-9ef5-4a79-8f79-e6386ba86071" (UID: "87eafc83-9ef5-4a79-8f79-e6386ba86071"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.648510 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-config-data" (OuterVolumeSpecName: "config-data") pod "87eafc83-9ef5-4a79-8f79-e6386ba86071" (UID: "87eafc83-9ef5-4a79-8f79-e6386ba86071"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.712980 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.713006 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-config-data\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.713015 4810 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.713034 4810 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87eafc83-9ef5-4a79-8f79-e6386ba86071-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:57 crc kubenswrapper[4810]: I0110 07:03:57.713045 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v84j9\" (UniqueName: \"kubernetes.io/projected/87eafc83-9ef5-4a79-8f79-e6386ba86071-kube-api-access-v84j9\") on node \"crc\" DevicePath \"\"" Jan 10 07:03:57 crc kubenswrapper[4810]: E0110 07:03:57.775843 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87eafc83_9ef5_4a79_8f79_e6386ba86071.slice\": RecentStats: unable to find data in memory cache]" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.116460 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" 
event={"ID":"87eafc83-9ef5-4a79-8f79-e6386ba86071","Type":"ContainerDied","Data":"30de562d2810560bbca982155da352e370fafba48a4e93a4e755bf1a63a19884"} Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.116499 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30de562d2810560bbca982155da352e370fafba48a4e93a4e755bf1a63a19884" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.116549 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-bootstrap-mmrvb" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.244473 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystone-5466b78d84-wrtxv"] Jan 10 07:03:58 crc kubenswrapper[4810]: E0110 07:03:58.245023 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87eafc83-9ef5-4a79-8f79-e6386ba86071" containerName="keystone-bootstrap" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.245050 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="87eafc83-9ef5-4a79-8f79-e6386ba86071" containerName="keystone-bootstrap" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.245347 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="87eafc83-9ef5-4a79-8f79-e6386ba86071" containerName="keystone-bootstrap" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.246053 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.251213 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.251568 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-scripts" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.251907 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-keystone-dockercfg-vlnjr" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.257494 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"keystone-config-data" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.266423 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-5466b78d84-wrtxv"] Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.320552 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-credential-keys\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.320995 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-fernet-keys\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.321350 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-config-data\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.321579 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9bk\" (UniqueName: \"kubernetes.io/projected/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-kube-api-access-nw9bk\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.322042 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-scripts\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.423091 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-credential-keys\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.423740 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-fernet-keys\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.423778 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-config-data\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.423806 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw9bk\" (UniqueName: \"kubernetes.io/projected/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-kube-api-access-nw9bk\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.423834 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-scripts\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.427772 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-fernet-keys\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.428287 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-scripts\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.428333 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-credential-keys\") pod 
\"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.438504 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-config-data\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.459130 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw9bk\" (UniqueName: \"kubernetes.io/projected/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-kube-api-access-nw9bk\") pod \"keystone-5466b78d84-wrtxv\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:58 crc kubenswrapper[4810]: I0110 07:03:58.574611 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:03:59 crc kubenswrapper[4810]: I0110 07:03:59.020128 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystone-5466b78d84-wrtxv"] Jan 10 07:03:59 crc kubenswrapper[4810]: I0110 07:03:59.123018 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" event={"ID":"01b7ca68-697a-4b7a-8c56-73b4a6fc7283","Type":"ContainerStarted","Data":"05d8333240c2ab7ba456034c69ed31a99423aabbbb1f3f7a80f4e4c90a70128d"} Jan 10 07:04:02 crc kubenswrapper[4810]: I0110 07:04:02.141906 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" event={"ID":"01b7ca68-697a-4b7a-8c56-73b4a6fc7283","Type":"ContainerStarted","Data":"98290f3347a44234bcf9800c5d3cc6af1020f0d56a2a4922cc3110b02f8ca929"} Jan 10 07:04:02 crc kubenswrapper[4810]: I0110 07:04:02.142600 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:04:02 crc kubenswrapper[4810]: I0110 07:04:02.162181 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" podStartSLOduration=4.162162218 podStartE2EDuration="4.162162218s" podCreationTimestamp="2026-01-10 07:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:04:02.159974536 +0000 UTC m=+1070.775467419" watchObservedRunningTime="2026-01-10 07:04:02.162162218 +0000 UTC m=+1070.777655111" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.200718 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg"] Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.201914 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.204356 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-service-cert" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.204516 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-2sr25" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.231565 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg"] Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.308154 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-webhook-cert\") pod \"barbican-operator-controller-manager-8bf56dbd-zkzlg\" (UID: \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\") " pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.308289 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nhwm\" (UniqueName: \"kubernetes.io/projected/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-kube-api-access-4nhwm\") pod \"barbican-operator-controller-manager-8bf56dbd-zkzlg\" (UID: \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\") " pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.308341 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-apiservice-cert\") pod \"barbican-operator-controller-manager-8bf56dbd-zkzlg\" (UID: 
\"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\") " pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.409365 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nhwm\" (UniqueName: \"kubernetes.io/projected/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-kube-api-access-4nhwm\") pod \"barbican-operator-controller-manager-8bf56dbd-zkzlg\" (UID: \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\") " pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.409441 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-apiservice-cert\") pod \"barbican-operator-controller-manager-8bf56dbd-zkzlg\" (UID: \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\") " pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.409533 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-webhook-cert\") pod \"barbican-operator-controller-manager-8bf56dbd-zkzlg\" (UID: \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\") " pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.422046 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-webhook-cert\") pod \"barbican-operator-controller-manager-8bf56dbd-zkzlg\" (UID: \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\") " pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.422090 4810 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-apiservice-cert\") pod \"barbican-operator-controller-manager-8bf56dbd-zkzlg\" (UID: \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\") " pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.425988 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nhwm\" (UniqueName: \"kubernetes.io/projected/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-kube-api-access-4nhwm\") pod \"barbican-operator-controller-manager-8bf56dbd-zkzlg\" (UID: \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\") " pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.516801 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:04:10 crc kubenswrapper[4810]: I0110 07:04:10.963836 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg"] Jan 10 07:04:10 crc kubenswrapper[4810]: W0110 07:04:10.969760 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e7172d7_ebe7_4a16_878f_a82b25cdecbb.slice/crio-e9b0af997325e095e38d52f4fb6463a209d92bd503cdf6ec92550a777ff15e03 WatchSource:0}: Error finding container e9b0af997325e095e38d52f4fb6463a209d92bd503cdf6ec92550a777ff15e03: Status 404 returned error can't find the container with id e9b0af997325e095e38d52f4fb6463a209d92bd503cdf6ec92550a777ff15e03 Jan 10 07:04:11 crc kubenswrapper[4810]: I0110 07:04:11.207338 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" 
event={"ID":"9e7172d7-ebe7-4a16-878f-a82b25cdecbb","Type":"ContainerStarted","Data":"e9b0af997325e095e38d52f4fb6463a209d92bd503cdf6ec92550a777ff15e03"} Jan 10 07:04:14 crc kubenswrapper[4810]: I0110 07:04:14.234258 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" event={"ID":"9e7172d7-ebe7-4a16-878f-a82b25cdecbb","Type":"ContainerStarted","Data":"9189bec18c282ea248daa8eb931becd8a29f3c4d69d54f70c935c8e586a42233"} Jan 10 07:04:14 crc kubenswrapper[4810]: I0110 07:04:14.234901 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:04:14 crc kubenswrapper[4810]: I0110 07:04:14.259367 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" podStartSLOduration=1.190623971 podStartE2EDuration="4.259345758s" podCreationTimestamp="2026-01-10 07:04:10 +0000 UTC" firstStartedPulling="2026-01-10 07:04:10.971898912 +0000 UTC m=+1079.587391805" lastFinishedPulling="2026-01-10 07:04:14.040620709 +0000 UTC m=+1082.656113592" observedRunningTime="2026-01-10 07:04:14.256544431 +0000 UTC m=+1082.872037334" watchObservedRunningTime="2026-01-10 07:04:14.259345758 +0000 UTC m=+1082.874838651" Jan 10 07:04:20 crc kubenswrapper[4810]: I0110 07:04:20.523407 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:04:28 crc kubenswrapper[4810]: I0110 07:04:28.961126 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-create-cpptw"] Jan 10 07:04:28 crc kubenswrapper[4810]: I0110 07:04:28.962993 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-cpptw" Jan 10 07:04:28 crc kubenswrapper[4810]: I0110 07:04:28.972140 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-cpptw"] Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.077536 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-2aa3-account-create-update-krblm"] Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.078477 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.081054 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-db-secret" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.089321 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-2aa3-account-create-update-krblm"] Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.098263 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxq6w\" (UniqueName: \"kubernetes.io/projected/b6c92036-8ef9-4631-85c1-a11bc9f9829b-kube-api-access-jxq6w\") pod \"barbican-db-create-cpptw\" (UID: \"b6c92036-8ef9-4631-85c1-a11bc9f9829b\") " pod="swift-kuttl-tests/barbican-db-create-cpptw" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.098338 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c92036-8ef9-4631-85c1-a11bc9f9829b-operator-scripts\") pod \"barbican-db-create-cpptw\" (UID: \"b6c92036-8ef9-4631-85c1-a11bc9f9829b\") " pod="swift-kuttl-tests/barbican-db-create-cpptw" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.199548 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b6c92036-8ef9-4631-85c1-a11bc9f9829b-operator-scripts\") pod \"barbican-db-create-cpptw\" (UID: \"b6c92036-8ef9-4631-85c1-a11bc9f9829b\") " pod="swift-kuttl-tests/barbican-db-create-cpptw" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.199624 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d200a34b-dbff-467c-aa3f-8fed2875175c-operator-scripts\") pod \"barbican-2aa3-account-create-update-krblm\" (UID: \"d200a34b-dbff-467c-aa3f-8fed2875175c\") " pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.199666 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kdsr\" (UniqueName: \"kubernetes.io/projected/d200a34b-dbff-467c-aa3f-8fed2875175c-kube-api-access-4kdsr\") pod \"barbican-2aa3-account-create-update-krblm\" (UID: \"d200a34b-dbff-467c-aa3f-8fed2875175c\") " pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.199740 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxq6w\" (UniqueName: \"kubernetes.io/projected/b6c92036-8ef9-4631-85c1-a11bc9f9829b-kube-api-access-jxq6w\") pod \"barbican-db-create-cpptw\" (UID: \"b6c92036-8ef9-4631-85c1-a11bc9f9829b\") " pod="swift-kuttl-tests/barbican-db-create-cpptw" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.200333 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c92036-8ef9-4631-85c1-a11bc9f9829b-operator-scripts\") pod \"barbican-db-create-cpptw\" (UID: \"b6c92036-8ef9-4631-85c1-a11bc9f9829b\") " pod="swift-kuttl-tests/barbican-db-create-cpptw" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.225523 4810 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxq6w\" (UniqueName: \"kubernetes.io/projected/b6c92036-8ef9-4631-85c1-a11bc9f9829b-kube-api-access-jxq6w\") pod \"barbican-db-create-cpptw\" (UID: \"b6c92036-8ef9-4631-85c1-a11bc9f9829b\") " pod="swift-kuttl-tests/barbican-db-create-cpptw" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.280592 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-cpptw" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.301148 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kdsr\" (UniqueName: \"kubernetes.io/projected/d200a34b-dbff-467c-aa3f-8fed2875175c-kube-api-access-4kdsr\") pod \"barbican-2aa3-account-create-update-krblm\" (UID: \"d200a34b-dbff-467c-aa3f-8fed2875175c\") " pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.301290 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d200a34b-dbff-467c-aa3f-8fed2875175c-operator-scripts\") pod \"barbican-2aa3-account-create-update-krblm\" (UID: \"d200a34b-dbff-467c-aa3f-8fed2875175c\") " pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.301955 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d200a34b-dbff-467c-aa3f-8fed2875175c-operator-scripts\") pod \"barbican-2aa3-account-create-update-krblm\" (UID: \"d200a34b-dbff-467c-aa3f-8fed2875175c\") " pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.329037 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kdsr\" (UniqueName: 
\"kubernetes.io/projected/d200a34b-dbff-467c-aa3f-8fed2875175c-kube-api-access-4kdsr\") pod \"barbican-2aa3-account-create-update-krblm\" (UID: \"d200a34b-dbff-467c-aa3f-8fed2875175c\") " pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.391297 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.710656 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-2aa3-account-create-update-krblm"] Jan 10 07:04:29 crc kubenswrapper[4810]: W0110 07:04:29.711066 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd200a34b_dbff_467c_aa3f_8fed2875175c.slice/crio-9edce4ad163ec146e9d2bf773bd7ce9b8a982676dcdef91a654a7a97bd4555e2 WatchSource:0}: Error finding container 9edce4ad163ec146e9d2bf773bd7ce9b8a982676dcdef91a654a7a97bd4555e2: Status 404 returned error can't find the container with id 9edce4ad163ec146e9d2bf773bd7ce9b8a982676dcdef91a654a7a97bd4555e2 Jan 10 07:04:29 crc kubenswrapper[4810]: I0110 07:04:29.837235 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-create-cpptw"] Jan 10 07:04:29 crc kubenswrapper[4810]: W0110 07:04:29.846795 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c92036_8ef9_4631_85c1_a11bc9f9829b.slice/crio-57e63f5bdb63a87cf9edef2760f4e1fd04ac2f10fdfdeac596ddda62ae2f1e44 WatchSource:0}: Error finding container 57e63f5bdb63a87cf9edef2760f4e1fd04ac2f10fdfdeac596ddda62ae2f1e44: Status 404 returned error can't find the container with id 57e63f5bdb63a87cf9edef2760f4e1fd04ac2f10fdfdeac596ddda62ae2f1e44 Jan 10 07:04:30 crc kubenswrapper[4810]: I0110 07:04:30.013335 4810 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:04:30 crc kubenswrapper[4810]: I0110 07:04:30.366856 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" event={"ID":"d200a34b-dbff-467c-aa3f-8fed2875175c","Type":"ContainerStarted","Data":"9edce4ad163ec146e9d2bf773bd7ce9b8a982676dcdef91a654a7a97bd4555e2"} Jan 10 07:04:30 crc kubenswrapper[4810]: I0110 07:04:30.368675 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-cpptw" event={"ID":"b6c92036-8ef9-4631-85c1-a11bc9f9829b","Type":"ContainerStarted","Data":"57e63f5bdb63a87cf9edef2760f4e1fd04ac2f10fdfdeac596ddda62ae2f1e44"} Jan 10 07:04:31 crc kubenswrapper[4810]: I0110 07:04:31.384020 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" event={"ID":"d200a34b-dbff-467c-aa3f-8fed2875175c","Type":"ContainerStarted","Data":"ad71bf32f3aeaf27b90f3311bd0d136e37d8220d97b44b8e79479512d36013f5"} Jan 10 07:04:31 crc kubenswrapper[4810]: I0110 07:04:31.386781 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-cpptw" event={"ID":"b6c92036-8ef9-4631-85c1-a11bc9f9829b","Type":"ContainerStarted","Data":"52f3902518e37fcfb66f76ac8e0b74a03c8abf92c64355e50ff92a6177b5f796"} Jan 10 07:04:31 crc kubenswrapper[4810]: I0110 07:04:31.413483 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" podStartSLOduration=2.413457586 podStartE2EDuration="2.413457586s" podCreationTimestamp="2026-01-10 07:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:04:31.410652699 +0000 UTC m=+1100.026145612" watchObservedRunningTime="2026-01-10 07:04:31.413457586 +0000 UTC m=+1100.028950509" Jan 10 07:04:31 
crc kubenswrapper[4810]: I0110 07:04:31.890823 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-vn7dw"] Jan 10 07:04:31 crc kubenswrapper[4810]: I0110 07:04:31.893275 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-vn7dw" Jan 10 07:04:31 crc kubenswrapper[4810]: I0110 07:04:31.895864 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-index-dockercfg-rsps2" Jan 10 07:04:31 crc kubenswrapper[4810]: I0110 07:04:31.900297 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-vn7dw"] Jan 10 07:04:32 crc kubenswrapper[4810]: I0110 07:04:32.055209 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dphnr\" (UniqueName: \"kubernetes.io/projected/0b3bfd13-4157-4289-981b-55602232df01-kube-api-access-dphnr\") pod \"swift-operator-index-vn7dw\" (UID: \"0b3bfd13-4157-4289-981b-55602232df01\") " pod="openstack-operators/swift-operator-index-vn7dw" Jan 10 07:04:32 crc kubenswrapper[4810]: I0110 07:04:32.157207 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dphnr\" (UniqueName: \"kubernetes.io/projected/0b3bfd13-4157-4289-981b-55602232df01-kube-api-access-dphnr\") pod \"swift-operator-index-vn7dw\" (UID: \"0b3bfd13-4157-4289-981b-55602232df01\") " pod="openstack-operators/swift-operator-index-vn7dw" Jan 10 07:04:32 crc kubenswrapper[4810]: I0110 07:04:32.181699 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dphnr\" (UniqueName: \"kubernetes.io/projected/0b3bfd13-4157-4289-981b-55602232df01-kube-api-access-dphnr\") pod \"swift-operator-index-vn7dw\" (UID: \"0b3bfd13-4157-4289-981b-55602232df01\") " pod="openstack-operators/swift-operator-index-vn7dw" Jan 10 07:04:32 crc kubenswrapper[4810]: I0110 
07:04:32.305728 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-vn7dw" Jan 10 07:04:32 crc kubenswrapper[4810]: I0110 07:04:32.398219 4810 generic.go:334] "Generic (PLEG): container finished" podID="d200a34b-dbff-467c-aa3f-8fed2875175c" containerID="ad71bf32f3aeaf27b90f3311bd0d136e37d8220d97b44b8e79479512d36013f5" exitCode=0 Jan 10 07:04:32 crc kubenswrapper[4810]: I0110 07:04:32.398370 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" event={"ID":"d200a34b-dbff-467c-aa3f-8fed2875175c","Type":"ContainerDied","Data":"ad71bf32f3aeaf27b90f3311bd0d136e37d8220d97b44b8e79479512d36013f5"} Jan 10 07:04:32 crc kubenswrapper[4810]: I0110 07:04:32.401904 4810 generic.go:334] "Generic (PLEG): container finished" podID="b6c92036-8ef9-4631-85c1-a11bc9f9829b" containerID="52f3902518e37fcfb66f76ac8e0b74a03c8abf92c64355e50ff92a6177b5f796" exitCode=0 Jan 10 07:04:32 crc kubenswrapper[4810]: I0110 07:04:32.402030 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-cpptw" event={"ID":"b6c92036-8ef9-4631-85c1-a11bc9f9829b","Type":"ContainerDied","Data":"52f3902518e37fcfb66f76ac8e0b74a03c8abf92c64355e50ff92a6177b5f796"} Jan 10 07:04:32 crc kubenswrapper[4810]: I0110 07:04:32.799888 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-vn7dw"] Jan 10 07:04:32 crc kubenswrapper[4810]: W0110 07:04:32.802387 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b3bfd13_4157_4289_981b_55602232df01.slice/crio-ad85aa4c5d5c7b3a735e4dd8c82a81cdc8dc53e7d5e661bfaadbb838028a31a0 WatchSource:0}: Error finding container ad85aa4c5d5c7b3a735e4dd8c82a81cdc8dc53e7d5e661bfaadbb838028a31a0: Status 404 returned error can't find the container with id 
ad85aa4c5d5c7b3a735e4dd8c82a81cdc8dc53e7d5e661bfaadbb838028a31a0 Jan 10 07:04:33 crc kubenswrapper[4810]: I0110 07:04:33.414531 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-vn7dw" event={"ID":"0b3bfd13-4157-4289-981b-55602232df01","Type":"ContainerStarted","Data":"ad85aa4c5d5c7b3a735e4dd8c82a81cdc8dc53e7d5e661bfaadbb838028a31a0"} Jan 10 07:04:33 crc kubenswrapper[4810]: I0110 07:04:33.836727 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" Jan 10 07:04:33 crc kubenswrapper[4810]: I0110 07:04:33.911454 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-cpptw" Jan 10 07:04:33 crc kubenswrapper[4810]: I0110 07:04:33.983401 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d200a34b-dbff-467c-aa3f-8fed2875175c-operator-scripts\") pod \"d200a34b-dbff-467c-aa3f-8fed2875175c\" (UID: \"d200a34b-dbff-467c-aa3f-8fed2875175c\") " Jan 10 07:04:33 crc kubenswrapper[4810]: I0110 07:04:33.983485 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kdsr\" (UniqueName: \"kubernetes.io/projected/d200a34b-dbff-467c-aa3f-8fed2875175c-kube-api-access-4kdsr\") pod \"d200a34b-dbff-467c-aa3f-8fed2875175c\" (UID: \"d200a34b-dbff-467c-aa3f-8fed2875175c\") " Jan 10 07:04:33 crc kubenswrapper[4810]: I0110 07:04:33.985052 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d200a34b-dbff-467c-aa3f-8fed2875175c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d200a34b-dbff-467c-aa3f-8fed2875175c" (UID: "d200a34b-dbff-467c-aa3f-8fed2875175c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:04:33 crc kubenswrapper[4810]: I0110 07:04:33.989372 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d200a34b-dbff-467c-aa3f-8fed2875175c-kube-api-access-4kdsr" (OuterVolumeSpecName: "kube-api-access-4kdsr") pod "d200a34b-dbff-467c-aa3f-8fed2875175c" (UID: "d200a34b-dbff-467c-aa3f-8fed2875175c"). InnerVolumeSpecName "kube-api-access-4kdsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.084771 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c92036-8ef9-4631-85c1-a11bc9f9829b-operator-scripts\") pod \"b6c92036-8ef9-4631-85c1-a11bc9f9829b\" (UID: \"b6c92036-8ef9-4631-85c1-a11bc9f9829b\") " Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.084929 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxq6w\" (UniqueName: \"kubernetes.io/projected/b6c92036-8ef9-4631-85c1-a11bc9f9829b-kube-api-access-jxq6w\") pod \"b6c92036-8ef9-4631-85c1-a11bc9f9829b\" (UID: \"b6c92036-8ef9-4631-85c1-a11bc9f9829b\") " Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.085223 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d200a34b-dbff-467c-aa3f-8fed2875175c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.085239 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kdsr\" (UniqueName: \"kubernetes.io/projected/d200a34b-dbff-467c-aa3f-8fed2875175c-kube-api-access-4kdsr\") on node \"crc\" DevicePath \"\"" Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.085252 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b6c92036-8ef9-4631-85c1-a11bc9f9829b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6c92036-8ef9-4631-85c1-a11bc9f9829b" (UID: "b6c92036-8ef9-4631-85c1-a11bc9f9829b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.089333 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c92036-8ef9-4631-85c1-a11bc9f9829b-kube-api-access-jxq6w" (OuterVolumeSpecName: "kube-api-access-jxq6w") pod "b6c92036-8ef9-4631-85c1-a11bc9f9829b" (UID: "b6c92036-8ef9-4631-85c1-a11bc9f9829b"). InnerVolumeSpecName "kube-api-access-jxq6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.186584 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxq6w\" (UniqueName: \"kubernetes.io/projected/b6c92036-8ef9-4631-85c1-a11bc9f9829b-kube-api-access-jxq6w\") on node \"crc\" DevicePath \"\"" Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.186621 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6c92036-8ef9-4631-85c1-a11bc9f9829b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.421863 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" event={"ID":"d200a34b-dbff-467c-aa3f-8fed2875175c","Type":"ContainerDied","Data":"9edce4ad163ec146e9d2bf773bd7ce9b8a982676dcdef91a654a7a97bd4555e2"} Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.422278 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9edce4ad163ec146e9d2bf773bd7ce9b8a982676dcdef91a654a7a97bd4555e2" Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.421872 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-2aa3-account-create-update-krblm" Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.440011 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-create-cpptw" event={"ID":"b6c92036-8ef9-4631-85c1-a11bc9f9829b","Type":"ContainerDied","Data":"57e63f5bdb63a87cf9edef2760f4e1fd04ac2f10fdfdeac596ddda62ae2f1e44"} Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.440056 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57e63f5bdb63a87cf9edef2760f4e1fd04ac2f10fdfdeac596ddda62ae2f1e44" Jan 10 07:04:34 crc kubenswrapper[4810]: I0110 07:04:34.440529 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-db-create-cpptw" Jan 10 07:04:35 crc kubenswrapper[4810]: I0110 07:04:35.454491 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-vn7dw" event={"ID":"0b3bfd13-4157-4289-981b-55602232df01","Type":"ContainerStarted","Data":"2e3dac380ab3a2450bb7290a406204ebc235725a2cd74069ce8f82180493914e"} Jan 10 07:04:35 crc kubenswrapper[4810]: I0110 07:04:35.478475 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-vn7dw" podStartSLOduration=2.697937176 podStartE2EDuration="4.478451141s" podCreationTimestamp="2026-01-10 07:04:31 +0000 UTC" firstStartedPulling="2026-01-10 07:04:32.80486108 +0000 UTC m=+1101.420354003" lastFinishedPulling="2026-01-10 07:04:34.585375075 +0000 UTC m=+1103.200867968" observedRunningTime="2026-01-10 07:04:35.475757616 +0000 UTC m=+1104.091250549" watchObservedRunningTime="2026-01-10 07:04:35.478451141 +0000 UTC m=+1104.093944064" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.515998 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-db-sync-l684q"] Jan 10 07:04:39 crc kubenswrapper[4810]: E0110 
07:04:39.516806 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c92036-8ef9-4631-85c1-a11bc9f9829b" containerName="mariadb-database-create" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.516823 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c92036-8ef9-4631-85c1-a11bc9f9829b" containerName="mariadb-database-create" Jan 10 07:04:39 crc kubenswrapper[4810]: E0110 07:04:39.516837 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d200a34b-dbff-467c-aa3f-8fed2875175c" containerName="mariadb-account-create-update" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.516846 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d200a34b-dbff-467c-aa3f-8fed2875175c" containerName="mariadb-account-create-update" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.516978 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="d200a34b-dbff-467c-aa3f-8fed2875175c" containerName="mariadb-account-create-update" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.516999 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c92036-8ef9-4631-85c1-a11bc9f9829b" containerName="mariadb-database-create" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.517569 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-l684q" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.519882 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.530683 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-zfq4x" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.531255 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-l684q"] Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.677614 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpjxb\" (UniqueName: \"kubernetes.io/projected/dd9db040-7192-4fdb-951f-07e206f32728-kube-api-access-wpjxb\") pod \"barbican-db-sync-l684q\" (UID: \"dd9db040-7192-4fdb-951f-07e206f32728\") " pod="swift-kuttl-tests/barbican-db-sync-l684q" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.677727 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd9db040-7192-4fdb-951f-07e206f32728-db-sync-config-data\") pod \"barbican-db-sync-l684q\" (UID: \"dd9db040-7192-4fdb-951f-07e206f32728\") " pod="swift-kuttl-tests/barbican-db-sync-l684q" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.779441 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpjxb\" (UniqueName: \"kubernetes.io/projected/dd9db040-7192-4fdb-951f-07e206f32728-kube-api-access-wpjxb\") pod \"barbican-db-sync-l684q\" (UID: \"dd9db040-7192-4fdb-951f-07e206f32728\") " pod="swift-kuttl-tests/barbican-db-sync-l684q" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.779521 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/dd9db040-7192-4fdb-951f-07e206f32728-db-sync-config-data\") pod \"barbican-db-sync-l684q\" (UID: \"dd9db040-7192-4fdb-951f-07e206f32728\") " pod="swift-kuttl-tests/barbican-db-sync-l684q" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.787947 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd9db040-7192-4fdb-951f-07e206f32728-db-sync-config-data\") pod \"barbican-db-sync-l684q\" (UID: \"dd9db040-7192-4fdb-951f-07e206f32728\") " pod="swift-kuttl-tests/barbican-db-sync-l684q" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.822121 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpjxb\" (UniqueName: \"kubernetes.io/projected/dd9db040-7192-4fdb-951f-07e206f32728-kube-api-access-wpjxb\") pod \"barbican-db-sync-l684q\" (UID: \"dd9db040-7192-4fdb-951f-07e206f32728\") " pod="swift-kuttl-tests/barbican-db-sync-l684q" Jan 10 07:04:39 crc kubenswrapper[4810]: I0110 07:04:39.837571 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-l684q" Jan 10 07:04:40 crc kubenswrapper[4810]: I0110 07:04:40.331453 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-l684q"] Jan 10 07:04:40 crc kubenswrapper[4810]: I0110 07:04:40.519636 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-l684q" event={"ID":"dd9db040-7192-4fdb-951f-07e206f32728","Type":"ContainerStarted","Data":"37c9af2f0b5d2b792810d05e2f372e1be0d365feb31f3d21c68d072b46b161bd"} Jan 10 07:04:42 crc kubenswrapper[4810]: I0110 07:04:42.306024 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-vn7dw" Jan 10 07:04:42 crc kubenswrapper[4810]: I0110 07:04:42.306359 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/swift-operator-index-vn7dw" Jan 10 07:04:42 crc kubenswrapper[4810]: I0110 07:04:42.337083 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/swift-operator-index-vn7dw" Jan 10 07:04:42 crc kubenswrapper[4810]: I0110 07:04:42.570987 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-index-vn7dw" Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.540323 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr"] Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.542445 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.545246 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-8bf9r" Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.554211 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr"] Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.559036 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-l684q" event={"ID":"dd9db040-7192-4fdb-951f-07e206f32728","Type":"ContainerStarted","Data":"5a0c21d6bf258ac463f0ecd86f456a01ed37a6415e99ee7aab1e88d2796c087f"} Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.611146 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-db-sync-l684q" podStartSLOduration=2.098498556 podStartE2EDuration="6.611131516s" podCreationTimestamp="2026-01-10 07:04:39 +0000 UTC" firstStartedPulling="2026-01-10 07:04:40.329856689 +0000 UTC m=+1108.945349592" lastFinishedPulling="2026-01-10 07:04:44.842489659 +0000 UTC m=+1113.457982552" observedRunningTime="2026-01-10 07:04:45.606240478 +0000 UTC m=+1114.221733391" watchObservedRunningTime="2026-01-10 07:04:45.611131516 +0000 UTC m=+1114.226624399" Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.708392 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58f3a749-f75e-4359-975c-cde9d6cca018-bundle\") pod \"1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr\" (UID: \"58f3a749-f75e-4359-975c-cde9d6cca018\") " pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.708579 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58f3a749-f75e-4359-975c-cde9d6cca018-util\") pod \"1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr\" (UID: \"58f3a749-f75e-4359-975c-cde9d6cca018\") " pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.708679 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj6fg\" (UniqueName: \"kubernetes.io/projected/58f3a749-f75e-4359-975c-cde9d6cca018-kube-api-access-vj6fg\") pod \"1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr\" (UID: \"58f3a749-f75e-4359-975c-cde9d6cca018\") " pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.809878 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj6fg\" (UniqueName: \"kubernetes.io/projected/58f3a749-f75e-4359-975c-cde9d6cca018-kube-api-access-vj6fg\") pod \"1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr\" (UID: \"58f3a749-f75e-4359-975c-cde9d6cca018\") " pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.810131 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58f3a749-f75e-4359-975c-cde9d6cca018-bundle\") pod \"1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr\" (UID: \"58f3a749-f75e-4359-975c-cde9d6cca018\") " pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.810274 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/58f3a749-f75e-4359-975c-cde9d6cca018-util\") pod \"1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr\" (UID: \"58f3a749-f75e-4359-975c-cde9d6cca018\") " pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.811107 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58f3a749-f75e-4359-975c-cde9d6cca018-util\") pod \"1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr\" (UID: \"58f3a749-f75e-4359-975c-cde9d6cca018\") " pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.811269 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58f3a749-f75e-4359-975c-cde9d6cca018-bundle\") pod \"1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr\" (UID: \"58f3a749-f75e-4359-975c-cde9d6cca018\") " pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.850031 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj6fg\" (UniqueName: \"kubernetes.io/projected/58f3a749-f75e-4359-975c-cde9d6cca018-kube-api-access-vj6fg\") pod \"1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr\" (UID: \"58f3a749-f75e-4359-975c-cde9d6cca018\") " pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" Jan 10 07:04:45 crc kubenswrapper[4810]: I0110 07:04:45.877012 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" Jan 10 07:04:46 crc kubenswrapper[4810]: I0110 07:04:46.155255 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr"] Jan 10 07:04:46 crc kubenswrapper[4810]: I0110 07:04:46.570057 4810 generic.go:334] "Generic (PLEG): container finished" podID="58f3a749-f75e-4359-975c-cde9d6cca018" containerID="963484e64b9d0a7cddbb6c37324579efe31f852300e7dcf6846254d8c6480cfe" exitCode=0 Jan 10 07:04:46 crc kubenswrapper[4810]: I0110 07:04:46.570289 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" event={"ID":"58f3a749-f75e-4359-975c-cde9d6cca018","Type":"ContainerDied","Data":"963484e64b9d0a7cddbb6c37324579efe31f852300e7dcf6846254d8c6480cfe"} Jan 10 07:04:46 crc kubenswrapper[4810]: I0110 07:04:46.570974 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" event={"ID":"58f3a749-f75e-4359-975c-cde9d6cca018","Type":"ContainerStarted","Data":"ef646e59c5303350d24a7a62a5937cde81e26e73c5ffb29828a438bbb948feb8"} Jan 10 07:04:47 crc kubenswrapper[4810]: I0110 07:04:47.590332 4810 generic.go:334] "Generic (PLEG): container finished" podID="58f3a749-f75e-4359-975c-cde9d6cca018" containerID="172764c6148fb4b814d4026834e102edecd8ac340089e85119824db36e120b7f" exitCode=0 Jan 10 07:04:47 crc kubenswrapper[4810]: I0110 07:04:47.592653 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" event={"ID":"58f3a749-f75e-4359-975c-cde9d6cca018","Type":"ContainerDied","Data":"172764c6148fb4b814d4026834e102edecd8ac340089e85119824db36e120b7f"} Jan 10 07:04:48 crc kubenswrapper[4810]: I0110 07:04:48.603963 4810 generic.go:334] 
"Generic (PLEG): container finished" podID="dd9db040-7192-4fdb-951f-07e206f32728" containerID="5a0c21d6bf258ac463f0ecd86f456a01ed37a6415e99ee7aab1e88d2796c087f" exitCode=0 Jan 10 07:04:48 crc kubenswrapper[4810]: I0110 07:04:48.604052 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-l684q" event={"ID":"dd9db040-7192-4fdb-951f-07e206f32728","Type":"ContainerDied","Data":"5a0c21d6bf258ac463f0ecd86f456a01ed37a6415e99ee7aab1e88d2796c087f"} Jan 10 07:04:48 crc kubenswrapper[4810]: I0110 07:04:48.609170 4810 generic.go:334] "Generic (PLEG): container finished" podID="58f3a749-f75e-4359-975c-cde9d6cca018" containerID="3f9337297a17f474a93fe102b86571b82440edbaff3fd5514a7fdbfd0d2f447b" exitCode=0 Jan 10 07:04:48 crc kubenswrapper[4810]: I0110 07:04:48.609308 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" event={"ID":"58f3a749-f75e-4359-975c-cde9d6cca018","Type":"ContainerDied","Data":"3f9337297a17f474a93fe102b86571b82440edbaff3fd5514a7fdbfd0d2f447b"} Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.011330 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.075496 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-l684q" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.185815 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd9db040-7192-4fdb-951f-07e206f32728-db-sync-config-data\") pod \"dd9db040-7192-4fdb-951f-07e206f32728\" (UID: \"dd9db040-7192-4fdb-951f-07e206f32728\") " Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.185886 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58f3a749-f75e-4359-975c-cde9d6cca018-util\") pod \"58f3a749-f75e-4359-975c-cde9d6cca018\" (UID: \"58f3a749-f75e-4359-975c-cde9d6cca018\") " Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.185911 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58f3a749-f75e-4359-975c-cde9d6cca018-bundle\") pod \"58f3a749-f75e-4359-975c-cde9d6cca018\" (UID: \"58f3a749-f75e-4359-975c-cde9d6cca018\") " Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.186017 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj6fg\" (UniqueName: \"kubernetes.io/projected/58f3a749-f75e-4359-975c-cde9d6cca018-kube-api-access-vj6fg\") pod \"58f3a749-f75e-4359-975c-cde9d6cca018\" (UID: \"58f3a749-f75e-4359-975c-cde9d6cca018\") " Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.186042 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpjxb\" (UniqueName: \"kubernetes.io/projected/dd9db040-7192-4fdb-951f-07e206f32728-kube-api-access-wpjxb\") pod \"dd9db040-7192-4fdb-951f-07e206f32728\" (UID: \"dd9db040-7192-4fdb-951f-07e206f32728\") " Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.187287 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/58f3a749-f75e-4359-975c-cde9d6cca018-bundle" (OuterVolumeSpecName: "bundle") pod "58f3a749-f75e-4359-975c-cde9d6cca018" (UID: "58f3a749-f75e-4359-975c-cde9d6cca018"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.197526 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd9db040-7192-4fdb-951f-07e206f32728-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "dd9db040-7192-4fdb-951f-07e206f32728" (UID: "dd9db040-7192-4fdb-951f-07e206f32728"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.197590 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58f3a749-f75e-4359-975c-cde9d6cca018-kube-api-access-vj6fg" (OuterVolumeSpecName: "kube-api-access-vj6fg") pod "58f3a749-f75e-4359-975c-cde9d6cca018" (UID: "58f3a749-f75e-4359-975c-cde9d6cca018"). InnerVolumeSpecName "kube-api-access-vj6fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.197652 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9db040-7192-4fdb-951f-07e206f32728-kube-api-access-wpjxb" (OuterVolumeSpecName: "kube-api-access-wpjxb") pod "dd9db040-7192-4fdb-951f-07e206f32728" (UID: "dd9db040-7192-4fdb-951f-07e206f32728"). InnerVolumeSpecName "kube-api-access-wpjxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.219833 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58f3a749-f75e-4359-975c-cde9d6cca018-util" (OuterVolumeSpecName: "util") pod "58f3a749-f75e-4359-975c-cde9d6cca018" (UID: "58f3a749-f75e-4359-975c-cde9d6cca018"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.287575 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj6fg\" (UniqueName: \"kubernetes.io/projected/58f3a749-f75e-4359-975c-cde9d6cca018-kube-api-access-vj6fg\") on node \"crc\" DevicePath \"\"" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.287631 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpjxb\" (UniqueName: \"kubernetes.io/projected/dd9db040-7192-4fdb-951f-07e206f32728-kube-api-access-wpjxb\") on node \"crc\" DevicePath \"\"" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.287655 4810 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd9db040-7192-4fdb-951f-07e206f32728-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.287672 4810 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/58f3a749-f75e-4359-975c-cde9d6cca018-util\") on node \"crc\" DevicePath \"\"" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.287691 4810 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/58f3a749-f75e-4359-975c-cde9d6cca018-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.629947 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-db-sync-l684q" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.629952 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-db-sync-l684q" event={"ID":"dd9db040-7192-4fdb-951f-07e206f32728","Type":"ContainerDied","Data":"37c9af2f0b5d2b792810d05e2f372e1be0d365feb31f3d21c68d072b46b161bd"} Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.630018 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37c9af2f0b5d2b792810d05e2f372e1be0d365feb31f3d21c68d072b46b161bd" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.633034 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" event={"ID":"58f3a749-f75e-4359-975c-cde9d6cca018","Type":"ContainerDied","Data":"ef646e59c5303350d24a7a62a5937cde81e26e73c5ffb29828a438bbb948feb8"} Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.633062 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef646e59c5303350d24a7a62a5937cde81e26e73c5ffb29828a438bbb948feb8" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.633144 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.951854 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw"] Jan 10 07:04:50 crc kubenswrapper[4810]: E0110 07:04:50.952124 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f3a749-f75e-4359-975c-cde9d6cca018" containerName="util" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.952137 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f3a749-f75e-4359-975c-cde9d6cca018" containerName="util" Jan 10 07:04:50 crc kubenswrapper[4810]: E0110 07:04:50.952153 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f3a749-f75e-4359-975c-cde9d6cca018" containerName="extract" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.952161 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f3a749-f75e-4359-975c-cde9d6cca018" containerName="extract" Jan 10 07:04:50 crc kubenswrapper[4810]: E0110 07:04:50.952184 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f3a749-f75e-4359-975c-cde9d6cca018" containerName="pull" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.952213 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f3a749-f75e-4359-975c-cde9d6cca018" containerName="pull" Jan 10 07:04:50 crc kubenswrapper[4810]: E0110 07:04:50.952233 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9db040-7192-4fdb-951f-07e206f32728" containerName="barbican-db-sync" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.952242 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9db040-7192-4fdb-951f-07e206f32728" containerName="barbican-db-sync" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.952369 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd9db040-7192-4fdb-951f-07e206f32728" 
containerName="barbican-db-sync" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.952392 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="58f3a749-f75e-4359-975c-cde9d6cca018" containerName="extract" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.953090 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.955623 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-barbican-dockercfg-zfq4x" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.955775 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-worker-config-data" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.955784 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-config-data" Jan 10 07:04:50 crc kubenswrapper[4810]: I0110 07:04:50.987663 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw"] Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.017271 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx"] Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.018520 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.020502 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-keystone-listener-config-data" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.036394 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx"] Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.099186 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-config-data\") pod \"barbican-worker-799d94b4df-cbsxw\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.099273 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-logs\") pod \"barbican-worker-799d94b4df-cbsxw\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.099299 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqkkr\" (UniqueName: \"kubernetes.io/projected/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-kube-api-access-mqkkr\") pod \"barbican-worker-799d94b4df-cbsxw\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.099342 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-config-data-custom\") pod \"barbican-worker-799d94b4df-cbsxw\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.164280 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican-api-867d7c779b-gb8r6"] Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.165520 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.167563 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"barbican-api-config-data" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.188950 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-867d7c779b-gb8r6"] Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.200833 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c53844ae-14bb-4c0b-808e-3a198ad77d3c-logs\") pod \"barbican-keystone-listener-5579f95b8d-5grzx\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.200904 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-logs\") pod \"barbican-worker-799d94b4df-cbsxw\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.200938 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqkkr\" (UniqueName: 
\"kubernetes.io/projected/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-kube-api-access-mqkkr\") pod \"barbican-worker-799d94b4df-cbsxw\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.200995 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-config-data-custom\") pod \"barbican-worker-799d94b4df-cbsxw\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.201048 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-config-data\") pod \"barbican-worker-799d94b4df-cbsxw\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.201082 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c53844ae-14bb-4c0b-808e-3a198ad77d3c-config-data-custom\") pod \"barbican-keystone-listener-5579f95b8d-5grzx\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.201106 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53844ae-14bb-4c0b-808e-3a198ad77d3c-config-data\") pod \"barbican-keystone-listener-5579f95b8d-5grzx\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.201126 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfjh7\" (UniqueName: \"kubernetes.io/projected/c53844ae-14bb-4c0b-808e-3a198ad77d3c-kube-api-access-zfjh7\") pod \"barbican-keystone-listener-5579f95b8d-5grzx\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.201611 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-logs\") pod \"barbican-worker-799d94b4df-cbsxw\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.211728 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-config-data\") pod \"barbican-worker-799d94b4df-cbsxw\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.213333 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-config-data-custom\") pod \"barbican-worker-799d94b4df-cbsxw\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.223611 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqkkr\" (UniqueName: \"kubernetes.io/projected/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-kube-api-access-mqkkr\") pod \"barbican-worker-799d94b4df-cbsxw\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:51 crc 
kubenswrapper[4810]: I0110 07:04:51.301712 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.302083 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f8d132-32ca-40ae-a116-0684c284eb4e-logs\") pod \"barbican-api-867d7c779b-gb8r6\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.302501 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c53844ae-14bb-4c0b-808e-3a198ad77d3c-config-data-custom\") pod \"barbican-keystone-listener-5579f95b8d-5grzx\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.302538 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53844ae-14bb-4c0b-808e-3a198ad77d3c-config-data\") pod \"barbican-keystone-listener-5579f95b8d-5grzx\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.302557 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfjh7\" (UniqueName: \"kubernetes.io/projected/c53844ae-14bb-4c0b-808e-3a198ad77d3c-kube-api-access-zfjh7\") pod \"barbican-keystone-listener-5579f95b8d-5grzx\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.302603 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70f8d132-32ca-40ae-a116-0684c284eb4e-config-data-custom\") pod \"barbican-api-867d7c779b-gb8r6\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.302652 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr994\" (UniqueName: \"kubernetes.io/projected/70f8d132-32ca-40ae-a116-0684c284eb4e-kube-api-access-zr994\") pod \"barbican-api-867d7c779b-gb8r6\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.302671 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f8d132-32ca-40ae-a116-0684c284eb4e-config-data\") pod \"barbican-api-867d7c779b-gb8r6\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.302703 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c53844ae-14bb-4c0b-808e-3a198ad77d3c-logs\") pod \"barbican-keystone-listener-5579f95b8d-5grzx\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.303235 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c53844ae-14bb-4c0b-808e-3a198ad77d3c-logs\") pod \"barbican-keystone-listener-5579f95b8d-5grzx\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc 
kubenswrapper[4810]: I0110 07:04:51.320970 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c53844ae-14bb-4c0b-808e-3a198ad77d3c-config-data-custom\") pod \"barbican-keystone-listener-5579f95b8d-5grzx\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.321386 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53844ae-14bb-4c0b-808e-3a198ad77d3c-config-data\") pod \"barbican-keystone-listener-5579f95b8d-5grzx\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.335694 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfjh7\" (UniqueName: \"kubernetes.io/projected/c53844ae-14bb-4c0b-808e-3a198ad77d3c-kube-api-access-zfjh7\") pod \"barbican-keystone-listener-5579f95b8d-5grzx\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.335915 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.404086 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr994\" (UniqueName: \"kubernetes.io/projected/70f8d132-32ca-40ae-a116-0684c284eb4e-kube-api-access-zr994\") pod \"barbican-api-867d7c779b-gb8r6\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.404131 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f8d132-32ca-40ae-a116-0684c284eb4e-config-data\") pod \"barbican-api-867d7c779b-gb8r6\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.404228 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f8d132-32ca-40ae-a116-0684c284eb4e-logs\") pod \"barbican-api-867d7c779b-gb8r6\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.404265 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70f8d132-32ca-40ae-a116-0684c284eb4e-config-data-custom\") pod \"barbican-api-867d7c779b-gb8r6\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.405277 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f8d132-32ca-40ae-a116-0684c284eb4e-logs\") pod \"barbican-api-867d7c779b-gb8r6\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " 
pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.408877 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70f8d132-32ca-40ae-a116-0684c284eb4e-config-data-custom\") pod \"barbican-api-867d7c779b-gb8r6\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.411073 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f8d132-32ca-40ae-a116-0684c284eb4e-config-data\") pod \"barbican-api-867d7c779b-gb8r6\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.428616 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr994\" (UniqueName: \"kubernetes.io/projected/70f8d132-32ca-40ae-a116-0684c284eb4e-kube-api-access-zr994\") pod \"barbican-api-867d7c779b-gb8r6\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.477791 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.700250 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-api-867d7c779b-gb8r6"] Jan 10 07:04:51 crc kubenswrapper[4810]: W0110 07:04:51.701078 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70f8d132_32ca_40ae_a116_0684c284eb4e.slice/crio-05ad3520d7b263d7d7b730f6f785b8d41ff8e8206a3c2998a4d3ebb56da17822 WatchSource:0}: Error finding container 05ad3520d7b263d7d7b730f6f785b8d41ff8e8206a3c2998a4d3ebb56da17822: Status 404 returned error can't find the container with id 05ad3520d7b263d7d7b730f6f785b8d41ff8e8206a3c2998a4d3ebb56da17822 Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.728695 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw"] Jan 10 07:04:51 crc kubenswrapper[4810]: W0110 07:04:51.733814 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ae7f276_37e7_4ef5_831a_4c8b911bfc84.slice/crio-9d8318dc82d6fb25f7d4747e0bafe08afdc9d78f64037163e167ac894216356c WatchSource:0}: Error finding container 9d8318dc82d6fb25f7d4747e0bafe08afdc9d78f64037163e167ac894216356c: Status 404 returned error can't find the container with id 9d8318dc82d6fb25f7d4747e0bafe08afdc9d78f64037163e167ac894216356c Jan 10 07:04:51 crc kubenswrapper[4810]: I0110 07:04:51.821724 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx"] Jan 10 07:04:51 crc kubenswrapper[4810]: W0110 07:04:51.829445 4810 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc53844ae_14bb_4c0b_808e_3a198ad77d3c.slice/crio-a2d5137c72221fc844222ad988a1d7f4207fb4da547f25fa566cb86b4d5040fc WatchSource:0}: Error finding container a2d5137c72221fc844222ad988a1d7f4207fb4da547f25fa566cb86b4d5040fc: Status 404 returned error can't find the container with id a2d5137c72221fc844222ad988a1d7f4207fb4da547f25fa566cb86b4d5040fc Jan 10 07:04:52 crc kubenswrapper[4810]: I0110 07:04:52.648577 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" event={"ID":"9ae7f276-37e7-4ef5-831a-4c8b911bfc84","Type":"ContainerStarted","Data":"9d8318dc82d6fb25f7d4747e0bafe08afdc9d78f64037163e167ac894216356c"} Jan 10 07:04:52 crc kubenswrapper[4810]: I0110 07:04:52.650393 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" event={"ID":"70f8d132-32ca-40ae-a116-0684c284eb4e","Type":"ContainerStarted","Data":"341a4f04aff29fad2fb9ca7620978f8049217d34ad6b6a718b0d5e6470e17d91"} Jan 10 07:04:52 crc kubenswrapper[4810]: I0110 07:04:52.650447 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" event={"ID":"70f8d132-32ca-40ae-a116-0684c284eb4e","Type":"ContainerStarted","Data":"b1c2f09df1781147b6b8ccf4a85b36113f6cbad2bff102b1f466ae4e62836d2b"} Jan 10 07:04:52 crc kubenswrapper[4810]: I0110 07:04:52.650457 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" event={"ID":"70f8d132-32ca-40ae-a116-0684c284eb4e","Type":"ContainerStarted","Data":"05ad3520d7b263d7d7b730f6f785b8d41ff8e8206a3c2998a4d3ebb56da17822"} Jan 10 07:04:52 crc kubenswrapper[4810]: I0110 07:04:52.651409 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:04:52 crc kubenswrapper[4810]: I0110 07:04:52.651433 4810 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6"
Jan 10 07:04:52 crc kubenswrapper[4810]: I0110 07:04:52.652258 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" event={"ID":"c53844ae-14bb-4c0b-808e-3a198ad77d3c","Type":"ContainerStarted","Data":"a2d5137c72221fc844222ad988a1d7f4207fb4da547f25fa566cb86b4d5040fc"}
Jan 10 07:04:52 crc kubenswrapper[4810]: I0110 07:04:52.679403 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" podStartSLOduration=1.679385157 podStartE2EDuration="1.679385157s" podCreationTimestamp="2026-01-10 07:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:04:52.675314339 +0000 UTC m=+1121.290807282" watchObservedRunningTime="2026-01-10 07:04:52.679385157 +0000 UTC m=+1121.294878040"
Jan 10 07:04:53 crc kubenswrapper[4810]: I0110 07:04:53.663482 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" event={"ID":"c53844ae-14bb-4c0b-808e-3a198ad77d3c","Type":"ContainerStarted","Data":"8e0033c15227b4d7045de1f476c6557d8e254b22361b03603aae6d056dabd636"}
Jan 10 07:04:53 crc kubenswrapper[4810]: I0110 07:04:53.669393 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" event={"ID":"9ae7f276-37e7-4ef5-831a-4c8b911bfc84","Type":"ContainerStarted","Data":"f5185552ac3bc1bd54632d70763e412461cc956133d4f5ecf1c39e38b6ad3981"}
Jan 10 07:04:53 crc kubenswrapper[4810]: I0110 07:04:53.669645 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" event={"ID":"9ae7f276-37e7-4ef5-831a-4c8b911bfc84","Type":"ContainerStarted","Data":"f5149bac691ab2cf15c4be9553e38e23aa809552c6038100850b95503d844ceb"}
Jan 10 07:04:53 crc kubenswrapper[4810]: I0110 07:04:53.688528 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" podStartSLOduration=2.626164018 podStartE2EDuration="3.688497742s" podCreationTimestamp="2026-01-10 07:04:50 +0000 UTC" firstStartedPulling="2026-01-10 07:04:51.735677043 +0000 UTC m=+1120.351169926" lastFinishedPulling="2026-01-10 07:04:52.798010757 +0000 UTC m=+1121.413503650" observedRunningTime="2026-01-10 07:04:53.68840215 +0000 UTC m=+1122.303895053" watchObservedRunningTime="2026-01-10 07:04:53.688497742 +0000 UTC m=+1122.303990675"
Jan 10 07:04:54 crc kubenswrapper[4810]: I0110 07:04:54.674060 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" event={"ID":"c53844ae-14bb-4c0b-808e-3a198ad77d3c","Type":"ContainerStarted","Data":"6d9fab0247e123215fc3bef4edf0bc2ec4b961752acebed1090a11f61e1a602f"}
Jan 10 07:04:54 crc kubenswrapper[4810]: I0110 07:04:54.698349 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" podStartSLOduration=3.155844555 podStartE2EDuration="4.698321984s" podCreationTimestamp="2026-01-10 07:04:50 +0000 UTC" firstStartedPulling="2026-01-10 07:04:51.83126634 +0000 UTC m=+1120.446759233" lastFinishedPulling="2026-01-10 07:04:53.373743769 +0000 UTC m=+1121.989236662" observedRunningTime="2026-01-10 07:04:54.692660058 +0000 UTC m=+1123.308152981" watchObservedRunningTime="2026-01-10 07:04:54.698321984 +0000 UTC m=+1123.313814917"
Jan 10 07:04:57 crc kubenswrapper[4810]: I0110 07:04:57.914054 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6"
Jan 10 07:04:57 crc kubenswrapper[4810]: I0110 07:04:57.980360 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6"
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.292390 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"]
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.293976 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.296671 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-w9dn7"
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.296902 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-service-cert"
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.316631 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"]
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.377505 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-webhook-cert\") pod \"swift-operator-controller-manager-6ddd477b75-wq6zz\" (UID: \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\") " pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.377565 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-apiservice-cert\") pod \"swift-operator-controller-manager-6ddd477b75-wq6zz\" (UID: \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\") " pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.377610 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6pd9\" (UniqueName: \"kubernetes.io/projected/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-kube-api-access-w6pd9\") pod \"swift-operator-controller-manager-6ddd477b75-wq6zz\" (UID: \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\") " pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.479396 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-apiservice-cert\") pod \"swift-operator-controller-manager-6ddd477b75-wq6zz\" (UID: \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\") " pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.479490 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6pd9\" (UniqueName: \"kubernetes.io/projected/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-kube-api-access-w6pd9\") pod \"swift-operator-controller-manager-6ddd477b75-wq6zz\" (UID: \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\") " pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.479561 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-webhook-cert\") pod \"swift-operator-controller-manager-6ddd477b75-wq6zz\" (UID: \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\") " pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.484800 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-apiservice-cert\") pod \"swift-operator-controller-manager-6ddd477b75-wq6zz\" (UID: \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\") " pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.488099 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-webhook-cert\") pod \"swift-operator-controller-manager-6ddd477b75-wq6zz\" (UID: \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\") " pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.497857 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6pd9\" (UniqueName: \"kubernetes.io/projected/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-kube-api-access-w6pd9\") pod \"swift-operator-controller-manager-6ddd477b75-wq6zz\" (UID: \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\") " pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:05:01 crc kubenswrapper[4810]: I0110 07:05:01.610176 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:05:02 crc kubenswrapper[4810]: I0110 07:05:02.111468 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"]
Jan 10 07:05:02 crc kubenswrapper[4810]: I0110 07:05:02.762507 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz" event={"ID":"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee","Type":"ContainerStarted","Data":"ebea0ebbf22fdf163bb92ff44d3dd495dcb6da1c98dac28d044293020ad440c4"}
Jan 10 07:05:05 crc kubenswrapper[4810]: I0110 07:05:05.807049 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz" event={"ID":"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee","Type":"ContainerStarted","Data":"94b80991cc8e7345157481034bf2533ae23fd39de75ed8e18e6dc57c7eaea2fb"}
Jan 10 07:05:05 crc kubenswrapper[4810]: I0110 07:05:05.808431 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:05:05 crc kubenswrapper[4810]: I0110 07:05:05.831554 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz" podStartSLOduration=2.8878370589999998 podStartE2EDuration="4.831530858s" podCreationTimestamp="2026-01-10 07:05:01 +0000 UTC" firstStartedPulling="2026-01-10 07:05:02.107235029 +0000 UTC m=+1130.722727912" lastFinishedPulling="2026-01-10 07:05:04.050928818 +0000 UTC m=+1132.666421711" observedRunningTime="2026-01-10 07:05:05.829030659 +0000 UTC m=+1134.444523552" watchObservedRunningTime="2026-01-10 07:05:05.831530858 +0000 UTC m=+1134.447023751"
Jan 10 07:05:11 crc kubenswrapper[4810]: I0110 07:05:11.616837 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.603501 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.607868 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.610609 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.611627 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.611645 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-t69wq"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.612714 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.644532 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.715876 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.716239 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2726r\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-kube-api-access-2726r\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.716309 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a75358c5-e04b-409e-8ba5-c5e184d656b1-lock\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.716352 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a75358c5-e04b-409e-8ba5-c5e184d656b1-cache\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.716432 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.818025 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a75358c5-e04b-409e-8ba5-c5e184d656b1-lock\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.818080 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a75358c5-e04b-409e-8ba5-c5e184d656b1-cache\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.818120 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.818163 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.818180 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2726r\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-kube-api-access-2726r\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: E0110 07:05:21.818381 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:05:21 crc kubenswrapper[4810]: E0110 07:05:21.818395 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:05:21 crc kubenswrapper[4810]: E0110 07:05:21.818439 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift podName:a75358c5-e04b-409e-8ba5-c5e184d656b1 nodeName:}" failed. No retries permitted until 2026-01-10 07:05:22.318423298 +0000 UTC m=+1150.933916181 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift") pod "swift-storage-0" (UID: "a75358c5-e04b-409e-8ba5-c5e184d656b1") : configmap "swift-ring-files" not found
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.818576 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a75358c5-e04b-409e-8ba5-c5e184d656b1-cache\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.818578 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") device mount path \"/mnt/openstack/pv02\"" pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.818904 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a75358c5-e04b-409e-8ba5-c5e184d656b1-lock\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.844255 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2726r\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-kube-api-access-2726r\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:21 crc kubenswrapper[4810]: I0110 07:05:21.847042 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:22 crc kubenswrapper[4810]: I0110 07:05:22.326556 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:22 crc kubenswrapper[4810]: E0110 07:05:22.326856 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:05:22 crc kubenswrapper[4810]: E0110 07:05:22.326894 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:05:22 crc kubenswrapper[4810]: E0110 07:05:22.327030 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift podName:a75358c5-e04b-409e-8ba5-c5e184d656b1 nodeName:}" failed. No retries permitted until 2026-01-10 07:05:23.326996558 +0000 UTC m=+1151.942489481 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift") pod "swift-storage-0" (UID: "a75358c5-e04b-409e-8ba5-c5e184d656b1") : configmap "swift-ring-files" not found
Jan 10 07:05:23 crc kubenswrapper[4810]: I0110 07:05:23.342665 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:23 crc kubenswrapper[4810]: E0110 07:05:23.343115 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:05:23 crc kubenswrapper[4810]: E0110 07:05:23.343134 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:05:23 crc kubenswrapper[4810]: E0110 07:05:23.343215 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift podName:a75358c5-e04b-409e-8ba5-c5e184d656b1 nodeName:}" failed. No retries permitted until 2026-01-10 07:05:25.343173263 +0000 UTC m=+1153.958666156 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift") pod "swift-storage-0" (UID: "a75358c5-e04b-409e-8ba5-c5e184d656b1") : configmap "swift-ring-files" not found
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.154594 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"]
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.156065 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.158297 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.165511 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"]
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.272226 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e84852f5-9196-4304-8c33-83cfa9bfc818-run-httpd\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.272614 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84852f5-9196-4304-8c33-83cfa9bfc818-config-data\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.272702 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2r44\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-kube-api-access-l2r44\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.272803 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.272838 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e84852f5-9196-4304-8c33-83cfa9bfc818-log-httpd\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.373950 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2r44\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-kube-api-access-l2r44\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.374047 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.374072 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e84852f5-9196-4304-8c33-83cfa9bfc818-log-httpd\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.374110 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.374158 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e84852f5-9196-4304-8c33-83cfa9bfc818-run-httpd\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.374178 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84852f5-9196-4304-8c33-83cfa9bfc818-config-data\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: E0110 07:05:25.374323 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:05:25 crc kubenswrapper[4810]: E0110 07:05:25.374348 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:05:25 crc kubenswrapper[4810]: E0110 07:05:25.374359 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:05:25 crc kubenswrapper[4810]: E0110 07:05:25.374402 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq: configmap "swift-ring-files" not found
Jan 10 07:05:25 crc kubenswrapper[4810]: E0110 07:05:25.374403 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift podName:a75358c5-e04b-409e-8ba5-c5e184d656b1 nodeName:}" failed. No retries permitted until 2026-01-10 07:05:29.374386635 +0000 UTC m=+1157.989879518 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift") pod "swift-storage-0" (UID: "a75358c5-e04b-409e-8ba5-c5e184d656b1") : configmap "swift-ring-files" not found
Jan 10 07:05:25 crc kubenswrapper[4810]: E0110 07:05:25.374501 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift podName:e84852f5-9196-4304-8c33-83cfa9bfc818 nodeName:}" failed. No retries permitted until 2026-01-10 07:05:25.874473177 +0000 UTC m=+1154.489966140 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift") pod "swift-proxy-67f6cc5479-mj6nq" (UID: "e84852f5-9196-4304-8c33-83cfa9bfc818") : configmap "swift-ring-files" not found
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.375100 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e84852f5-9196-4304-8c33-83cfa9bfc818-run-httpd\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.375622 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e84852f5-9196-4304-8c33-83cfa9bfc818-log-httpd\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.381431 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84852f5-9196-4304-8c33-83cfa9bfc818-config-data\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.404516 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2r44\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-kube-api-access-l2r44\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.667493 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9fr49"]
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.668796 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.670773 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.677268 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9fr49"]
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.679268 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.779528 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/09bccd5c-f025-466d-8af8-aa53e8033a90-dispersionconf\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.779584 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/09bccd5c-f025-466d-8af8-aa53e8033a90-etc-swift\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.779609 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09bccd5c-f025-466d-8af8-aa53e8033a90-scripts\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.779707 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/09bccd5c-f025-466d-8af8-aa53e8033a90-ring-data-devices\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.779810 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26gc\" (UniqueName: \"kubernetes.io/projected/09bccd5c-f025-466d-8af8-aa53e8033a90-kube-api-access-n26gc\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.779926 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/09bccd5c-f025-466d-8af8-aa53e8033a90-swiftconf\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.881549 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/09bccd5c-f025-466d-8af8-aa53e8033a90-ring-data-devices\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.881601 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n26gc\" (UniqueName: \"kubernetes.io/projected/09bccd5c-f025-466d-8af8-aa53e8033a90-kube-api-access-n26gc\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.881633 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.881652 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/09bccd5c-f025-466d-8af8-aa53e8033a90-swiftconf\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.881686 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/09bccd5c-f025-466d-8af8-aa53e8033a90-dispersionconf\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.881717 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/09bccd5c-f025-466d-8af8-aa53e8033a90-etc-swift\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.881758 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09bccd5c-f025-466d-8af8-aa53e8033a90-scripts\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.882487 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09bccd5c-f025-466d-8af8-aa53e8033a90-scripts\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.882904 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/09bccd5c-f025-466d-8af8-aa53e8033a90-ring-data-devices\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: E0110 07:05:25.883222 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:05:25 crc kubenswrapper[4810]: E0110 07:05:25.883240 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq: configmap "swift-ring-files" not found
Jan 10 07:05:25 crc kubenswrapper[4810]: E0110 07:05:25.883272 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift podName:e84852f5-9196-4304-8c33-83cfa9bfc818 nodeName:}" failed. No retries permitted until 2026-01-10 07:05:26.883260712 +0000 UTC m=+1155.498753595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift") pod "swift-proxy-67f6cc5479-mj6nq" (UID: "e84852f5-9196-4304-8c33-83cfa9bfc818") : configmap "swift-ring-files" not found
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.884040 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/09bccd5c-f025-466d-8af8-aa53e8033a90-etc-swift\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.886103 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/09bccd5c-f025-466d-8af8-aa53e8033a90-swiftconf\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.897668 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/09bccd5c-f025-466d-8af8-aa53e8033a90-dispersionconf\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " pod="swift-kuttl-tests/swift-ring-rebalance-9fr49"
Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.901068 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n26gc\" (UniqueName: \"kubernetes.io/projected/09bccd5c-f025-466d-8af8-aa53e8033a90-kube-api-access-n26gc\") pod \"swift-ring-rebalance-9fr49\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\")
" pod="swift-kuttl-tests/swift-ring-rebalance-9fr49" Jan 10 07:05:25 crc kubenswrapper[4810]: I0110 07:05:25.990107 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9fr49" Jan 10 07:05:26 crc kubenswrapper[4810]: I0110 07:05:26.421278 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9fr49"] Jan 10 07:05:26 crc kubenswrapper[4810]: W0110 07:05:26.428073 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09bccd5c_f025_466d_8af8_aa53e8033a90.slice/crio-f091c6d91d76f1fae0ebd718e51ae5a95815b4ff8f2409f70518530b665daf32 WatchSource:0}: Error finding container f091c6d91d76f1fae0ebd718e51ae5a95815b4ff8f2409f70518530b665daf32: Status 404 returned error can't find the container with id f091c6d91d76f1fae0ebd718e51ae5a95815b4ff8f2409f70518530b665daf32 Jan 10 07:05:26 crc kubenswrapper[4810]: I0110 07:05:26.897894 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" Jan 10 07:05:26 crc kubenswrapper[4810]: E0110 07:05:26.898144 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:05:26 crc kubenswrapper[4810]: E0110 07:05:26.898168 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq: configmap "swift-ring-files" not found Jan 10 07:05:26 crc kubenswrapper[4810]: E0110 07:05:26.898254 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift 
podName:e84852f5-9196-4304-8c33-83cfa9bfc818 nodeName:}" failed. No retries permitted until 2026-01-10 07:05:28.898235366 +0000 UTC m=+1157.513728259 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift") pod "swift-proxy-67f6cc5479-mj6nq" (UID: "e84852f5-9196-4304-8c33-83cfa9bfc818") : configmap "swift-ring-files" not found Jan 10 07:05:26 crc kubenswrapper[4810]: I0110 07:05:26.976029 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-9fr49" event={"ID":"09bccd5c-f025-466d-8af8-aa53e8033a90","Type":"ContainerStarted","Data":"f091c6d91d76f1fae0ebd718e51ae5a95815b4ff8f2409f70518530b665daf32"} Jan 10 07:05:28 crc kubenswrapper[4810]: I0110 07:05:28.932305 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" Jan 10 07:05:28 crc kubenswrapper[4810]: E0110 07:05:28.932515 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:05:28 crc kubenswrapper[4810]: E0110 07:05:28.932949 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq: configmap "swift-ring-files" not found Jan 10 07:05:28 crc kubenswrapper[4810]: E0110 07:05:28.933010 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift podName:e84852f5-9196-4304-8c33-83cfa9bfc818 nodeName:}" failed. No retries permitted until 2026-01-10 07:05:32.932991664 +0000 UTC m=+1161.548484547 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift") pod "swift-proxy-67f6cc5479-mj6nq" (UID: "e84852f5-9196-4304-8c33-83cfa9bfc818") : configmap "swift-ring-files" not found Jan 10 07:05:29 crc kubenswrapper[4810]: I0110 07:05:29.439944 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:05:29 crc kubenswrapper[4810]: E0110 07:05:29.440122 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:05:29 crc kubenswrapper[4810]: E0110 07:05:29.440502 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 10 07:05:29 crc kubenswrapper[4810]: E0110 07:05:29.440637 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift podName:a75358c5-e04b-409e-8ba5-c5e184d656b1 nodeName:}" failed. No retries permitted until 2026-01-10 07:05:37.44062184 +0000 UTC m=+1166.056114723 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift") pod "swift-storage-0" (UID: "a75358c5-e04b-409e-8ba5-c5e184d656b1") : configmap "swift-ring-files" not found Jan 10 07:05:29 crc kubenswrapper[4810]: I0110 07:05:29.998150 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-9fr49" event={"ID":"09bccd5c-f025-466d-8af8-aa53e8033a90","Type":"ContainerStarted","Data":"c4b4aa3cf95cfef3de65cc4b9219ddcf3f5780f99f3a2d029ccbe580282dff46"} Jan 10 07:05:32 crc kubenswrapper[4810]: I0110 07:05:32.991769 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" Jan 10 07:05:32 crc kubenswrapper[4810]: E0110 07:05:32.992027 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:05:32 crc kubenswrapper[4810]: E0110 07:05:32.992285 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq: configmap "swift-ring-files" not found Jan 10 07:05:32 crc kubenswrapper[4810]: E0110 07:05:32.992375 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift podName:e84852f5-9196-4304-8c33-83cfa9bfc818 nodeName:}" failed. No retries permitted until 2026-01-10 07:05:40.992346983 +0000 UTC m=+1169.607839916 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift") pod "swift-proxy-67f6cc5479-mj6nq" (UID: "e84852f5-9196-4304-8c33-83cfa9bfc818") : configmap "swift-ring-files" not found Jan 10 07:05:36 crc kubenswrapper[4810]: I0110 07:05:36.044270 4810 generic.go:334] "Generic (PLEG): container finished" podID="09bccd5c-f025-466d-8af8-aa53e8033a90" containerID="c4b4aa3cf95cfef3de65cc4b9219ddcf3f5780f99f3a2d029ccbe580282dff46" exitCode=0 Jan 10 07:05:36 crc kubenswrapper[4810]: I0110 07:05:36.044349 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-9fr49" event={"ID":"09bccd5c-f025-466d-8af8-aa53e8033a90","Type":"ContainerDied","Data":"c4b4aa3cf95cfef3de65cc4b9219ddcf3f5780f99f3a2d029ccbe580282dff46"} Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.410801 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9fr49" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.460973 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.467967 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift\") pod \"swift-storage-0\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.546370 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.562268 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09bccd5c-f025-466d-8af8-aa53e8033a90-scripts\") pod \"09bccd5c-f025-466d-8af8-aa53e8033a90\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.562325 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/09bccd5c-f025-466d-8af8-aa53e8033a90-etc-swift\") pod \"09bccd5c-f025-466d-8af8-aa53e8033a90\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.562347 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/09bccd5c-f025-466d-8af8-aa53e8033a90-swiftconf\") pod \"09bccd5c-f025-466d-8af8-aa53e8033a90\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.562391 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/09bccd5c-f025-466d-8af8-aa53e8033a90-dispersionconf\") pod \"09bccd5c-f025-466d-8af8-aa53e8033a90\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.562467 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n26gc\" (UniqueName: \"kubernetes.io/projected/09bccd5c-f025-466d-8af8-aa53e8033a90-kube-api-access-n26gc\") pod \"09bccd5c-f025-466d-8af8-aa53e8033a90\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.562502 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/09bccd5c-f025-466d-8af8-aa53e8033a90-ring-data-devices\") pod \"09bccd5c-f025-466d-8af8-aa53e8033a90\" (UID: \"09bccd5c-f025-466d-8af8-aa53e8033a90\") " Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.563177 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09bccd5c-f025-466d-8af8-aa53e8033a90-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "09bccd5c-f025-466d-8af8-aa53e8033a90" (UID: "09bccd5c-f025-466d-8af8-aa53e8033a90"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.564542 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09bccd5c-f025-466d-8af8-aa53e8033a90-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "09bccd5c-f025-466d-8af8-aa53e8033a90" (UID: "09bccd5c-f025-466d-8af8-aa53e8033a90"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.567257 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09bccd5c-f025-466d-8af8-aa53e8033a90-kube-api-access-n26gc" (OuterVolumeSpecName: "kube-api-access-n26gc") pod "09bccd5c-f025-466d-8af8-aa53e8033a90" (UID: "09bccd5c-f025-466d-8af8-aa53e8033a90"). InnerVolumeSpecName "kube-api-access-n26gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.578868 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09bccd5c-f025-466d-8af8-aa53e8033a90-scripts" (OuterVolumeSpecName: "scripts") pod "09bccd5c-f025-466d-8af8-aa53e8033a90" (UID: "09bccd5c-f025-466d-8af8-aa53e8033a90"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.587115 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09bccd5c-f025-466d-8af8-aa53e8033a90-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "09bccd5c-f025-466d-8af8-aa53e8033a90" (UID: "09bccd5c-f025-466d-8af8-aa53e8033a90"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.597791 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09bccd5c-f025-466d-8af8-aa53e8033a90-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "09bccd5c-f025-466d-8af8-aa53e8033a90" (UID: "09bccd5c-f025-466d-8af8-aa53e8033a90"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.666727 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n26gc\" (UniqueName: \"kubernetes.io/projected/09bccd5c-f025-466d-8af8-aa53e8033a90-kube-api-access-n26gc\") on node \"crc\" DevicePath \"\"" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.667220 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/09bccd5c-f025-466d-8af8-aa53e8033a90-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.667249 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/09bccd5c-f025-466d-8af8-aa53e8033a90-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.667273 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/09bccd5c-f025-466d-8af8-aa53e8033a90-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:05:37 
crc kubenswrapper[4810]: I0110 07:05:37.667299 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/09bccd5c-f025-466d-8af8-aa53e8033a90-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:05:37 crc kubenswrapper[4810]: I0110 07:05:37.667323 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/09bccd5c-f025-466d-8af8-aa53e8033a90-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:05:38 crc kubenswrapper[4810]: I0110 07:05:38.022994 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:05:38 crc kubenswrapper[4810]: W0110 07:05:38.029122 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda75358c5_e04b_409e_8ba5_c5e184d656b1.slice/crio-095ba6a7faddbd38dd9ca9ab21a5e56da98a7dcd8fc91a24e92a74a7edcceacc WatchSource:0}: Error finding container 095ba6a7faddbd38dd9ca9ab21a5e56da98a7dcd8fc91a24e92a74a7edcceacc: Status 404 returned error can't find the container with id 095ba6a7faddbd38dd9ca9ab21a5e56da98a7dcd8fc91a24e92a74a7edcceacc Jan 10 07:05:38 crc kubenswrapper[4810]: I0110 07:05:38.064169 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-9fr49" event={"ID":"09bccd5c-f025-466d-8af8-aa53e8033a90","Type":"ContainerDied","Data":"f091c6d91d76f1fae0ebd718e51ae5a95815b4ff8f2409f70518530b665daf32"} Jan 10 07:05:38 crc kubenswrapper[4810]: I0110 07:05:38.064229 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f091c6d91d76f1fae0ebd718e51ae5a95815b4ff8f2409f70518530b665daf32" Jan 10 07:05:38 crc kubenswrapper[4810]: I0110 07:05:38.064279 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-9fr49" Jan 10 07:05:38 crc kubenswrapper[4810]: I0110 07:05:38.066559 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"095ba6a7faddbd38dd9ca9ab21a5e56da98a7dcd8fc91a24e92a74a7edcceacc"} Jan 10 07:05:38 crc kubenswrapper[4810]: I0110 07:05:38.349555 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-9fr49_09bccd5c-f025-466d-8af8-aa53e8033a90/swift-ring-rebalance/0.log" Jan 10 07:05:40 crc kubenswrapper[4810]: I0110 07:05:40.017598 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-9fr49_09bccd5c-f025-466d-8af8-aa53e8033a90/swift-ring-rebalance/0.log" Jan 10 07:05:40 crc kubenswrapper[4810]: I0110 07:05:40.082595 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"95c486fe8d34f8e3a1abfdfa11b5486e5df5de527c03cb589876f783d2808f62"} Jan 10 07:05:40 crc kubenswrapper[4810]: I0110 07:05:40.082642 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"fe049bbd23ee68b3d9927ed8f5ba00b69a609bf1e0d7e52d8bc54ab506731b7a"} Jan 10 07:05:41 crc kubenswrapper[4810]: I0110 07:05:41.021998 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" Jan 10 07:05:41 crc kubenswrapper[4810]: I0110 07:05:41.041796 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift\") pod \"swift-proxy-67f6cc5479-mj6nq\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" Jan 10 07:05:41 crc kubenswrapper[4810]: I0110 07:05:41.077661 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" Jan 10 07:05:41 crc kubenswrapper[4810]: I0110 07:05:41.119672 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"3260563e2c3b704cd8b15705b0ee92267bfad2eaba88fa8596c9371b5f5a13cd"} Jan 10 07:05:41 crc kubenswrapper[4810]: I0110 07:05:41.119744 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"4b6d6a8a032838dd1d8cd2fbd8e0daa09d6a922cb568f35445591a7fc8f3d1f2"} Jan 10 07:05:41 crc kubenswrapper[4810]: I0110 07:05:41.537525 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"] Jan 10 07:05:41 crc kubenswrapper[4810]: I0110 07:05:41.675140 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-9fr49_09bccd5c-f025-466d-8af8-aa53e8033a90/swift-ring-rebalance/0.log" Jan 10 07:05:42 crc kubenswrapper[4810]: W0110 07:05:42.165236 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode84852f5_9196_4304_8c33_83cfa9bfc818.slice/crio-a310c23f2a8fd33365ee278b195092f4618486f04b218651ad91876b48c79e63 WatchSource:0}: Error finding container a310c23f2a8fd33365ee278b195092f4618486f04b218651ad91876b48c79e63: Status 404 returned error can't find the container with id 
a310c23f2a8fd33365ee278b195092f4618486f04b218651ad91876b48c79e63 Jan 10 07:05:43 crc kubenswrapper[4810]: I0110 07:05:43.138839 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"21b6efddaa7b34fe603e3371ee72d2ba574cf0669eacc560723d3d342413a01e"} Jan 10 07:05:43 crc kubenswrapper[4810]: I0110 07:05:43.139170 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"128fd76c5dd360b4b795efa5f41c530eb21ed7c39a5de3eeed5522f286136f21"} Jan 10 07:05:43 crc kubenswrapper[4810]: I0110 07:05:43.139185 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"18f6d65cd6ed6853502e72ba6cab5c950ef00d118dee5b3215d114acd010284a"} Jan 10 07:05:43 crc kubenswrapper[4810]: I0110 07:05:43.140625 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" event={"ID":"e84852f5-9196-4304-8c33-83cfa9bfc818","Type":"ContainerStarted","Data":"d76686c76dc6556956e92752544c2b1c74a57b7fc40cb28a5887345655c845b6"} Jan 10 07:05:43 crc kubenswrapper[4810]: I0110 07:05:43.140658 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" event={"ID":"e84852f5-9196-4304-8c33-83cfa9bfc818","Type":"ContainerStarted","Data":"7c649a9a8f1a228c049b3b16c52f99158178222b09a0741609ec598696249343"} Jan 10 07:05:43 crc kubenswrapper[4810]: I0110 07:05:43.140670 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" event={"ID":"e84852f5-9196-4304-8c33-83cfa9bfc818","Type":"ContainerStarted","Data":"a310c23f2a8fd33365ee278b195092f4618486f04b218651ad91876b48c79e63"} Jan 10 07:05:43 crc 
kubenswrapper[4810]: I0110 07:05:43.141279 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" Jan 10 07:05:43 crc kubenswrapper[4810]: I0110 07:05:43.141336 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" Jan 10 07:05:43 crc kubenswrapper[4810]: I0110 07:05:43.181037 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" podStartSLOduration=18.180999095 podStartE2EDuration="18.180999095s" podCreationTimestamp="2026-01-10 07:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:05:43.165639096 +0000 UTC m=+1171.781131979" watchObservedRunningTime="2026-01-10 07:05:43.180999095 +0000 UTC m=+1171.796491978" Jan 10 07:05:43 crc kubenswrapper[4810]: I0110 07:05:43.343080 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-9fr49_09bccd5c-f025-466d-8af8-aa53e8033a90/swift-ring-rebalance/0.log" Jan 10 07:05:44 crc kubenswrapper[4810]: I0110 07:05:44.149905 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"94459d7a51910321e0eb775de5dfa2e54714b78581becf8aa03039eed74b512e"} Jan 10 07:05:44 crc kubenswrapper[4810]: I0110 07:05:44.977862 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-9fr49_09bccd5c-f025-466d-8af8-aa53e8033a90/swift-ring-rebalance/0.log" Jan 10 07:05:45 crc kubenswrapper[4810]: I0110 07:05:45.164484 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"064cf0033482b08b02beb362cc44504986ec69fa702709878b7dad655cae3a26"}
Jan 10 07:05:45 crc kubenswrapper[4810]: I0110 07:05:45.164527 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"23383170fe5100337b18f062e9a9b8cf955cc71256fd95623a318fef9f67313b"}
Jan 10 07:05:45 crc kubenswrapper[4810]: I0110 07:05:45.164538 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"bff692951863c7a90a2834a1bbae77bf426a3c39ba2bb08b8418474095658a09"}
Jan 10 07:05:45 crc kubenswrapper[4810]: I0110 07:05:45.164547 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"ac1e7fedff8ac508d556d8e7fe41d6c2484320ac090dbb7474a3704c31c76c5b"}
Jan 10 07:05:46 crc kubenswrapper[4810]: I0110 07:05:46.176373 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"8824c9a54a2c9217c99cdb159420c8131597b35d901c38d81c169a1941349229"}
Jan 10 07:05:46 crc kubenswrapper[4810]: I0110 07:05:46.176749 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"f958e6641185b495f45c8a6faf631ddeebcf22b1498068a61d1ca6dd437539e3"}
Jan 10 07:05:46 crc kubenswrapper[4810]: I0110 07:05:46.176775 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerStarted","Data":"7621375edfbec0a12c9033b1cd7a5898ef4065162ea91c44d0ad5d76343dcc36"}
Jan 10 07:05:46 crc kubenswrapper[4810]: I0110 07:05:46.219500 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=19.9642834 podStartE2EDuration="26.219479157s" podCreationTimestamp="2026-01-10 07:05:20 +0000 UTC" firstStartedPulling="2026-01-10 07:05:38.031567305 +0000 UTC m=+1166.647060188" lastFinishedPulling="2026-01-10 07:05:44.286763062 +0000 UTC m=+1172.902255945" observedRunningTime="2026-01-10 07:05:46.207872018 +0000 UTC m=+1174.823364921" watchObservedRunningTime="2026-01-10 07:05:46.219479157 +0000 UTC m=+1174.834972060"
Jan 10 07:05:46 crc kubenswrapper[4810]: I0110 07:05:46.529915 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-9fr49_09bccd5c-f025-466d-8af8-aa53e8033a90/swift-ring-rebalance/0.log"
Jan 10 07:05:48 crc kubenswrapper[4810]: I0110 07:05:48.177027 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-9fr49_09bccd5c-f025-466d-8af8-aa53e8033a90/swift-ring-rebalance/0.log"
Jan 10 07:05:49 crc kubenswrapper[4810]: I0110 07:05:49.768001 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-9fr49_09bccd5c-f025-466d-8af8-aa53e8033a90/swift-ring-rebalance/0.log"
Jan 10 07:05:51 crc kubenswrapper[4810]: I0110 07:05:51.081160 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:51 crc kubenswrapper[4810]: I0110 07:05:51.089706 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"
Jan 10 07:05:51 crc kubenswrapper[4810]: I0110 07:05:51.441214 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-9fr49_09bccd5c-f025-466d-8af8-aa53e8033a90/swift-ring-rebalance/0.log"
Jan 10 07:05:53 crc kubenswrapper[4810]: I0110 07:05:52.980274 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/swift-kuttl-tests_swift-ring-rebalance-9fr49_09bccd5c-f025-466d-8af8-aa53e8033a90/swift-ring-rebalance/0.log"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.393989 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Jan 10 07:05:54 crc kubenswrapper[4810]: E0110 07:05:54.394905 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09bccd5c-f025-466d-8af8-aa53e8033a90" containerName="swift-ring-rebalance"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.394942 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bccd5c-f025-466d-8af8-aa53e8033a90" containerName="swift-ring-rebalance"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.395289 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="09bccd5c-f025-466d-8af8-aa53e8033a90" containerName="swift-ring-rebalance"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.403709 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.408754 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.417911 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.433596 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.485258 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.533144 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-etc-swift\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.533214 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-cache\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.533239 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e59b90ce-9a68-4262-8eca-90acf601a7fe-cache\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.533262 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.533299 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.533326 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmqj9\" (UniqueName: \"kubernetes.io/projected/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-kube-api-access-rmqj9\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.533344 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e59b90ce-9a68-4262-8eca-90acf601a7fe-lock\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.533362 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-lock\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.533490 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcwjj\" (UniqueName: \"kubernetes.io/projected/e59b90ce-9a68-4262-8eca-90acf601a7fe-kube-api-access-vcwjj\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.533617 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e59b90ce-9a68-4262-8eca-90acf601a7fe-etc-swift\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.636966 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e59b90ce-9a68-4262-8eca-90acf601a7fe-cache\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637008 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637037 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637088 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e59b90ce-9a68-4262-8eca-90acf601a7fe-lock\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637104 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmqj9\" (UniqueName: \"kubernetes.io/projected/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-kube-api-access-rmqj9\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637120 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-lock\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637165 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcwjj\" (UniqueName: \"kubernetes.io/projected/e59b90ce-9a68-4262-8eca-90acf601a7fe-kube-api-access-vcwjj\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637223 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e59b90ce-9a68-4262-8eca-90acf601a7fe-etc-swift\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637252 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-etc-swift\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637265 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-cache\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637442 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") device mount path \"/mnt/openstack/pv03\"" pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637620 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e59b90ce-9a68-4262-8eca-90acf601a7fe-lock\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637638 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e59b90ce-9a68-4262-8eca-90acf601a7fe-cache\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637692 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-cache\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.637770 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") device mount path \"/mnt/openstack/pv01\"" pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.639437 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-lock\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.647526 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-etc-swift\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.651071 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e59b90ce-9a68-4262-8eca-90acf601a7fe-etc-swift\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.657100 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmqj9\" (UniqueName: \"kubernetes.io/projected/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-kube-api-access-rmqj9\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.657806 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcwjj\" (UniqueName: \"kubernetes.io/projected/e59b90ce-9a68-4262-8eca-90acf601a7fe-kube-api-access-vcwjj\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.663699 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-2\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") " pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.675670 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-1\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") " pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.731396 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:05:54 crc kubenswrapper[4810]: I0110 07:05:54.743233 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:05:55 crc kubenswrapper[4810]: W0110 07:05:55.218792 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e3bd11f_41c4_4d7e_a4db_feb15bf853b7.slice/crio-c45487bafb8409dfac207b9a98e6602d7fd6ad8a19d219c3eb7d40af0a97930f WatchSource:0}: Error finding container c45487bafb8409dfac207b9a98e6602d7fd6ad8a19d219c3eb7d40af0a97930f: Status 404 returned error can't find the container with id c45487bafb8409dfac207b9a98e6602d7fd6ad8a19d219c3eb7d40af0a97930f
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.224205 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.282105 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Jan 10 07:05:55 crc kubenswrapper[4810]: W0110 07:05:55.287464 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode59b90ce_9a68_4262_8eca_90acf601a7fe.slice/crio-2f65f2426935bea8f66a62c7b7887cce2cfaef46c7b6fbced559dd3201a6b14b WatchSource:0}: Error finding container 2f65f2426935bea8f66a62c7b7887cce2cfaef46c7b6fbced559dd3201a6b14b: Status 404 returned error can't find the container with id 2f65f2426935bea8f66a62c7b7887cce2cfaef46c7b6fbced559dd3201a6b14b
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.288977 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"c45487bafb8409dfac207b9a98e6602d7fd6ad8a19d219c3eb7d40af0a97930f"}
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.505665 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9fr49"]
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.510918 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-9fr49"]
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.537692 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-mtptn"]
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.538575 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.540816 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.541158 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.557688 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-mtptn"]
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.671520 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1939e73-ee85-4927-a954-7ca45c80767a-dispersionconf\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.671564 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1939e73-ee85-4927-a954-7ca45c80767a-ring-data-devices\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.671627 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vr2x\" (UniqueName: \"kubernetes.io/projected/d1939e73-ee85-4927-a954-7ca45c80767a-kube-api-access-7vr2x\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.671656 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1939e73-ee85-4927-a954-7ca45c80767a-swiftconf\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.671680 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1939e73-ee85-4927-a954-7ca45c80767a-etc-swift\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.671705 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1939e73-ee85-4927-a954-7ca45c80767a-scripts\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.705878 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09bccd5c-f025-466d-8af8-aa53e8033a90" path="/var/lib/kubelet/pods/09bccd5c-f025-466d-8af8-aa53e8033a90/volumes"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.772921 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1939e73-ee85-4927-a954-7ca45c80767a-swiftconf\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.772984 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1939e73-ee85-4927-a954-7ca45c80767a-etc-swift\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.773035 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1939e73-ee85-4927-a954-7ca45c80767a-scripts\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.773079 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1939e73-ee85-4927-a954-7ca45c80767a-dispersionconf\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.773099 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1939e73-ee85-4927-a954-7ca45c80767a-ring-data-devices\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.773147 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vr2x\" (UniqueName: \"kubernetes.io/projected/d1939e73-ee85-4927-a954-7ca45c80767a-kube-api-access-7vr2x\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.773666 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1939e73-ee85-4927-a954-7ca45c80767a-etc-swift\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.774255 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1939e73-ee85-4927-a954-7ca45c80767a-scripts\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.774445 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1939e73-ee85-4927-a954-7ca45c80767a-ring-data-devices\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.779100 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1939e73-ee85-4927-a954-7ca45c80767a-dispersionconf\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.779723 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1939e73-ee85-4927-a954-7ca45c80767a-swiftconf\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.789103 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vr2x\" (UniqueName: \"kubernetes.io/projected/d1939e73-ee85-4927-a954-7ca45c80767a-kube-api-access-7vr2x\") pod \"swift-ring-rebalance-mtptn\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:55 crc kubenswrapper[4810]: I0110 07:05:55.852487 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-mtptn"
Jan 10 07:05:56 crc kubenswrapper[4810]: I0110 07:05:56.298183 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"2f65f2426935bea8f66a62c7b7887cce2cfaef46c7b6fbced559dd3201a6b14b"}
Jan 10 07:05:56 crc kubenswrapper[4810]: I0110 07:05:56.347498 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-mtptn"]
Jan 10 07:05:56 crc kubenswrapper[4810]: W0110 07:05:56.355366 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1939e73_ee85_4927_a954_7ca45c80767a.slice/crio-6582f8d26e222ff8dbcf8a5be9814da89faaa54f0872061594eb5115954c35fb WatchSource:0}: Error finding container 6582f8d26e222ff8dbcf8a5be9814da89faaa54f0872061594eb5115954c35fb: Status 404 returned error can't find the container with id 6582f8d26e222ff8dbcf8a5be9814da89faaa54f0872061594eb5115954c35fb
Jan 10 07:05:57 crc kubenswrapper[4810]: I0110 07:05:57.312138 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-mtptn" event={"ID":"d1939e73-ee85-4927-a954-7ca45c80767a","Type":"ContainerStarted","Data":"afe7d4c70a56c2bfed15eeca41c87d18bf47895f7ef6e1303daa21c5e17afc5b"}
Jan 10 07:05:57 crc kubenswrapper[4810]: I0110 07:05:57.312563 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-mtptn" event={"ID":"d1939e73-ee85-4927-a954-7ca45c80767a","Type":"ContainerStarted","Data":"6582f8d26e222ff8dbcf8a5be9814da89faaa54f0872061594eb5115954c35fb"}
Jan 10 07:05:57 crc kubenswrapper[4810]: I0110 07:05:57.317043 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"aff81c52ba040f2c55fcc06f5348292a947d3ea6768b22daae2544a468781d09"}
Jan 10 07:05:57 crc kubenswrapper[4810]: I0110 07:05:57.317083 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"a3c4a778920bdcfc162059546a353e135dcd2ea1b6a0a0884e4abaef4653d6aa"}
Jan 10 07:05:57 crc kubenswrapper[4810]: I0110 07:05:57.317103 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"372c8eaafcd6ce6c9f2c2f62c996f9a11292af7ab244908e51f3b20940be2740"}
Jan 10 07:05:57 crc kubenswrapper[4810]: I0110 07:05:57.318841 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"3d8c21866fabab0d70e8e1d35b7b08821112e6fa634bb2c1b0f701b2b94119f8"}
Jan 10 07:05:57 crc kubenswrapper[4810]: I0110 07:05:57.318875 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"8761d3461a532827e0ed249aaa7c00b8567a3d711e8a1c6092d396f95c336467"}
Jan 10 07:05:57 crc kubenswrapper[4810]: I0110 07:05:57.318893 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"8258765d91bdb5e3f54d64e10aec0b6b1cfe07a3ce580829e9a678bb13a19aed"}
Jan 10 07:05:57 crc kubenswrapper[4810]: I0110 07:05:57.337594 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-mtptn" podStartSLOduration=2.337571219 podStartE2EDuration="2.337571219s" podCreationTimestamp="2026-01-10 07:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:05:57.33593752 +0000 UTC m=+1185.951430413" watchObservedRunningTime="2026-01-10 07:05:57.337571219 +0000 UTC m=+1185.953064102"
Jan 10 07:05:58 crc kubenswrapper[4810]: I0110 07:05:58.331302 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"9e44de55965700aa2a1738d19841eec121311ffe479ed52bbdfd1f5cb3f32d12"}
Jan 10 07:05:58 crc kubenswrapper[4810]: I0110 07:05:58.331347 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"9908ff9d26c56f81fb159d30e8893c01cecc049b1550089c99dc2b09e3b2d877"}
Jan 10 07:05:58 crc kubenswrapper[4810]: I0110 07:05:58.331374 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"1346be94515cec4d9a1588a90a2a5a36e28d40d4541180769eb86196e569034c"}
Jan 10 07:05:58 crc kubenswrapper[4810]: I0110 07:05:58.331384 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"8281cbc2a9e9c4f5e67f9e22afd89953cb5e28bb6166d94ca45efc00006596e3"}
Jan 10 07:05:58 crc kubenswrapper[4810]: I0110 07:05:58.331394 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"010edfc64ef9a279855a87d8dad7fdf00b6587754f598c38885a8d0459a3db1a"}
Jan 10 07:05:58 crc kubenswrapper[4810]: I0110 07:05:58.335659 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"eb8dcfa46566649bbb45366161cf2b4382878603181a69899d8b9f5d0229dea7"}
Jan 10 07:05:58 crc kubenswrapper[4810]: I0110 07:05:58.335706 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"db4d766a67288250d2e6479d3f814524d13cf0c6aabc004fd92a733a99ec536d"}
Jan 10 07:05:58 crc kubenswrapper[4810]: I0110 07:05:58.335717 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"10e517c56d4b6631859abbe184e61691ded46d983bb0a652e8316d88f670d7d9"}
Jan 10 07:05:58 crc kubenswrapper[4810]: I0110 07:05:58.335726 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"0e4bb6a84a9dc7b325f43e70051e92694630534e6d3ee7346b870fc88190ee19"}
Jan 10 07:05:58 crc kubenswrapper[4810]: I0110 07:05:58.335733 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"2a5cdbb179fb7e964bf55899f1b0a21819dbbdc619361dd13ad3f3864dd147d4"}
Jan 10 07:05:59 crc kubenswrapper[4810]: I0110 07:05:59.361858 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"a86527ae03fad35b0b66ec650e019532b24a40310c0ad4f3fe8d3189fd355a6b"}
Jan 10 07:05:59 crc kubenswrapper[4810]: I0110 07:05:59.362275 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"9f9fa47413eb8028943dac1ad3ba27db91e6cc8ebe9b05d4e80c48715a616d35"}
Jan 10 07:05:59 crc kubenswrapper[4810]: I0110 07:05:59.362294 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"9512c7cbb62d9a8534478dd52e144a475c693b2308ca31b270d11cf60206c8b4"}
Jan 10 07:05:59 crc kubenswrapper[4810]: I0110 07:05:59.368516 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"9bb4d63c1b3daa36e3a2ec65a2cedc4f5253601994fa95f8be2ee3c3bfe49778"}
Jan 10 07:05:59 crc kubenswrapper[4810]: I0110 07:05:59.368665 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"106211d527b2e9f985ff2fd4b65dc5897b4c0b169e12af3b66cba966a1606788"}
Jan 10 07:05:59 crc kubenswrapper[4810]: I0110 07:05:59.368770 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"dc920b856f0efd9bb7295730c5e1222e068efc741c239d2d5ddccf0cb1fe5adf"}
Jan 10 07:06:00 crc kubenswrapper[4810]: I0110 07:06:00.385932 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"9b34af72bebe0899e2b088115878ab15e8816b704d05aa4e118dc0d46b829daa"}
Jan 10 07:06:00 crc kubenswrapper[4810]: I0110 07:06:00.387443 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"326d29fd333a4f5fe159240b83a716cdb19340dd6eaf298128b2f9c09a746685"}
Jan 10 07:06:00 crc kubenswrapper[4810]: I0110 07:06:00.393344 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"1eba0e6ff51cf1a4638dad271bbeab4fcb42ddcf9abb1bbe26bdf3478bd4349f"}
Jan 10 07:06:00 crc kubenswrapper[4810]: I0110 07:06:00.393383 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"a8d6bbaeb07ad066ddfa58dcb1ad26125f59b9eca7ab00c23cf236cb96004a9c"}
Jan 10 07:06:01 crc kubenswrapper[4810]: I0110 07:06:01.406641 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"ede53848c5d2a69668ba69d9910c04ea953b29d099a12afd9481731d122fb7f0"}
Jan 10 07:06:01 crc kubenswrapper[4810]: I0110 07:06:01.406702 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerStarted","Data":"3df46909d6f39ae4d531e3640f55760d95af0e92dbf05a6001c7544567f31b3a"}
Jan 10 07:06:01 crc kubenswrapper[4810]: I0110 07:06:01.413462 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"93216bde1118222264a1b0bf8b8454e979d8908ed225e502b73a864ed22746b4"}
Jan 10 07:06:01 crc kubenswrapper[4810]: I0110 07:06:01.413530 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerStarted","Data":"9de95a21f8dbae911835b04d6bb9c08411a63ed592f2290198c531e6555ee3a4"}
Jan 10 07:06:01 crc kubenswrapper[4810]: I0110 07:06:01.457381 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-2" podStartSLOduration=8.45735804 podStartE2EDuration="8.45735804s" podCreationTimestamp="2026-01-10 07:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:06:01.452856232 +0000 UTC m=+1190.068349155" watchObservedRunningTime="2026-01-10 07:06:01.45735804 +0000 UTC m=+1190.072850933"
Jan 10 07:06:01 crc kubenswrapper[4810]: I0110 07:06:01.506827 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-1" podStartSLOduration=8.506811188 podStartE2EDuration="8.506811188s" podCreationTimestamp="2026-01-10 07:05:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC"
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:06:01.497015292 +0000 UTC m=+1190.112508175" watchObservedRunningTime="2026-01-10 07:06:01.506811188 +0000 UTC m=+1190.122304071" Jan 10 07:06:06 crc kubenswrapper[4810]: I0110 07:06:06.476874 4810 generic.go:334] "Generic (PLEG): container finished" podID="d1939e73-ee85-4927-a954-7ca45c80767a" containerID="afe7d4c70a56c2bfed15eeca41c87d18bf47895f7ef6e1303daa21c5e17afc5b" exitCode=0 Jan 10 07:06:06 crc kubenswrapper[4810]: I0110 07:06:06.477025 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-mtptn" event={"ID":"d1939e73-ee85-4927-a954-7ca45c80767a","Type":"ContainerDied","Data":"afe7d4c70a56c2bfed15eeca41c87d18bf47895f7ef6e1303daa21c5e17afc5b"} Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.783585 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-mtptn" Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.865079 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1939e73-ee85-4927-a954-7ca45c80767a-ring-data-devices\") pod \"d1939e73-ee85-4927-a954-7ca45c80767a\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.865136 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1939e73-ee85-4927-a954-7ca45c80767a-dispersionconf\") pod \"d1939e73-ee85-4927-a954-7ca45c80767a\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.865226 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vr2x\" (UniqueName: \"kubernetes.io/projected/d1939e73-ee85-4927-a954-7ca45c80767a-kube-api-access-7vr2x\") pod 
\"d1939e73-ee85-4927-a954-7ca45c80767a\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.865263 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1939e73-ee85-4927-a954-7ca45c80767a-etc-swift\") pod \"d1939e73-ee85-4927-a954-7ca45c80767a\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.865298 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1939e73-ee85-4927-a954-7ca45c80767a-scripts\") pod \"d1939e73-ee85-4927-a954-7ca45c80767a\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.865351 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1939e73-ee85-4927-a954-7ca45c80767a-swiftconf\") pod \"d1939e73-ee85-4927-a954-7ca45c80767a\" (UID: \"d1939e73-ee85-4927-a954-7ca45c80767a\") " Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.866034 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1939e73-ee85-4927-a954-7ca45c80767a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d1939e73-ee85-4927-a954-7ca45c80767a" (UID: "d1939e73-ee85-4927-a954-7ca45c80767a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.866549 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1939e73-ee85-4927-a954-7ca45c80767a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d1939e73-ee85-4927-a954-7ca45c80767a" (UID: "d1939e73-ee85-4927-a954-7ca45c80767a"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.875006 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1939e73-ee85-4927-a954-7ca45c80767a-kube-api-access-7vr2x" (OuterVolumeSpecName: "kube-api-access-7vr2x") pod "d1939e73-ee85-4927-a954-7ca45c80767a" (UID: "d1939e73-ee85-4927-a954-7ca45c80767a"). InnerVolumeSpecName "kube-api-access-7vr2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.888061 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1939e73-ee85-4927-a954-7ca45c80767a-scripts" (OuterVolumeSpecName: "scripts") pod "d1939e73-ee85-4927-a954-7ca45c80767a" (UID: "d1939e73-ee85-4927-a954-7ca45c80767a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.889638 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1939e73-ee85-4927-a954-7ca45c80767a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d1939e73-ee85-4927-a954-7ca45c80767a" (UID: "d1939e73-ee85-4927-a954-7ca45c80767a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.894882 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1939e73-ee85-4927-a954-7ca45c80767a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d1939e73-ee85-4927-a954-7ca45c80767a" (UID: "d1939e73-ee85-4927-a954-7ca45c80767a"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.967064 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d1939e73-ee85-4927-a954-7ca45c80767a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.967142 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d1939e73-ee85-4927-a954-7ca45c80767a-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.967165 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vr2x\" (UniqueName: \"kubernetes.io/projected/d1939e73-ee85-4927-a954-7ca45c80767a-kube-api-access-7vr2x\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.967186 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d1939e73-ee85-4927-a954-7ca45c80767a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.967249 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d1939e73-ee85-4927-a954-7ca45c80767a-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:07 crc kubenswrapper[4810]: I0110 07:06:07.967266 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d1939e73-ee85-4927-a954-7ca45c80767a-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.493187 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-mtptn" event={"ID":"d1939e73-ee85-4927-a954-7ca45c80767a","Type":"ContainerDied","Data":"6582f8d26e222ff8dbcf8a5be9814da89faaa54f0872061594eb5115954c35fb"} Jan 10 07:06:08 crc kubenswrapper[4810]: 
I0110 07:06:08.493684 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6582f8d26e222ff8dbcf8a5be9814da89faaa54f0872061594eb5115954c35fb" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.493791 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-mtptn" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.773901 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m"] Jan 10 07:06:08 crc kubenswrapper[4810]: E0110 07:06:08.774388 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1939e73-ee85-4927-a954-7ca45c80767a" containerName="swift-ring-rebalance" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.774406 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1939e73-ee85-4927-a954-7ca45c80767a" containerName="swift-ring-rebalance" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.774603 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1939e73-ee85-4927-a954-7ca45c80767a" containerName="swift-ring-rebalance" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.775175 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.778019 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.779619 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.784788 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m"] Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.882268 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b06c8069-1e16-4d5f-82ae-8094e4d2644c-dispersionconf\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.882492 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b06c8069-1e16-4d5f-82ae-8094e4d2644c-ring-data-devices\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.882629 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b06c8069-1e16-4d5f-82ae-8094e4d2644c-swiftconf\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.882703 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmcms\" (UniqueName: \"kubernetes.io/projected/b06c8069-1e16-4d5f-82ae-8094e4d2644c-kube-api-access-mmcms\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.882993 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b06c8069-1e16-4d5f-82ae-8094e4d2644c-scripts\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.883146 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b06c8069-1e16-4d5f-82ae-8094e4d2644c-etc-swift\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.985038 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b06c8069-1e16-4d5f-82ae-8094e4d2644c-ring-data-devices\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.985122 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b06c8069-1e16-4d5f-82ae-8094e4d2644c-swiftconf\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc 
kubenswrapper[4810]: I0110 07:06:08.985164 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmcms\" (UniqueName: \"kubernetes.io/projected/b06c8069-1e16-4d5f-82ae-8094e4d2644c-kube-api-access-mmcms\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.985323 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b06c8069-1e16-4d5f-82ae-8094e4d2644c-scripts\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.985379 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b06c8069-1e16-4d5f-82ae-8094e4d2644c-etc-swift\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.985447 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b06c8069-1e16-4d5f-82ae-8094e4d2644c-dispersionconf\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.986419 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b06c8069-1e16-4d5f-82ae-8094e4d2644c-etc-swift\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc 
kubenswrapper[4810]: I0110 07:06:08.986739 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b06c8069-1e16-4d5f-82ae-8094e4d2644c-ring-data-devices\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.986952 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b06c8069-1e16-4d5f-82ae-8094e4d2644c-scripts\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.990460 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b06c8069-1e16-4d5f-82ae-8094e4d2644c-dispersionconf\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:08 crc kubenswrapper[4810]: I0110 07:06:08.990828 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b06c8069-1e16-4d5f-82ae-8094e4d2644c-swiftconf\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:09 crc kubenswrapper[4810]: I0110 07:06:09.015302 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmcms\" (UniqueName: \"kubernetes.io/projected/b06c8069-1e16-4d5f-82ae-8094e4d2644c-kube-api-access-mmcms\") pod \"swift-ring-rebalance-debug-rjg9m\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:09 crc kubenswrapper[4810]: 
I0110 07:06:09.107030 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:09 crc kubenswrapper[4810]: I0110 07:06:09.621396 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m"] Jan 10 07:06:09 crc kubenswrapper[4810]: W0110 07:06:09.624328 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb06c8069_1e16_4d5f_82ae_8094e4d2644c.slice/crio-a2146638ba6e596869446b57a22081c314ca5c6779ac4eadfba1cf836ff77b28 WatchSource:0}: Error finding container a2146638ba6e596869446b57a22081c314ca5c6779ac4eadfba1cf836ff77b28: Status 404 returned error can't find the container with id a2146638ba6e596869446b57a22081c314ca5c6779ac4eadfba1cf836ff77b28 Jan 10 07:06:10 crc kubenswrapper[4810]: I0110 07:06:10.510726 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" event={"ID":"b06c8069-1e16-4d5f-82ae-8094e4d2644c","Type":"ContainerStarted","Data":"4213906ef0291c4466dfcbb3e3445a71e0f29c4f871c44b75a4c5729c59c19bc"} Jan 10 07:06:10 crc kubenswrapper[4810]: I0110 07:06:10.511044 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" event={"ID":"b06c8069-1e16-4d5f-82ae-8094e4d2644c","Type":"ContainerStarted","Data":"a2146638ba6e596869446b57a22081c314ca5c6779ac4eadfba1cf836ff77b28"} Jan 10 07:06:10 crc kubenswrapper[4810]: I0110 07:06:10.535322 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" podStartSLOduration=2.535296315 podStartE2EDuration="2.535296315s" podCreationTimestamp="2026-01-10 07:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:06:10.528954932 +0000 
UTC m=+1199.144447815" watchObservedRunningTime="2026-01-10 07:06:10.535296315 +0000 UTC m=+1199.150789198" Jan 10 07:06:11 crc kubenswrapper[4810]: I0110 07:06:11.519627 4810 generic.go:334] "Generic (PLEG): container finished" podID="b06c8069-1e16-4d5f-82ae-8094e4d2644c" containerID="4213906ef0291c4466dfcbb3e3445a71e0f29c4f871c44b75a4c5729c59c19bc" exitCode=0 Jan 10 07:06:11 crc kubenswrapper[4810]: I0110 07:06:11.519677 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" event={"ID":"b06c8069-1e16-4d5f-82ae-8094e4d2644c","Type":"ContainerDied","Data":"4213906ef0291c4466dfcbb3e3445a71e0f29c4f871c44b75a4c5729c59c19bc"} Jan 10 07:06:12 crc kubenswrapper[4810]: I0110 07:06:12.879796 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:12 crc kubenswrapper[4810]: I0110 07:06:12.917266 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m"] Jan 10 07:06:12 crc kubenswrapper[4810]: I0110 07:06:12.927297 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m"] Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.054395 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmcms\" (UniqueName: \"kubernetes.io/projected/b06c8069-1e16-4d5f-82ae-8094e4d2644c-kube-api-access-mmcms\") pod \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.054456 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b06c8069-1e16-4d5f-82ae-8094e4d2644c-swiftconf\") pod \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 
07:06:13.054540 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b06c8069-1e16-4d5f-82ae-8094e4d2644c-etc-swift\") pod \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.054635 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b06c8069-1e16-4d5f-82ae-8094e4d2644c-scripts\") pod \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.054694 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b06c8069-1e16-4d5f-82ae-8094e4d2644c-dispersionconf\") pod \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.054732 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b06c8069-1e16-4d5f-82ae-8094e4d2644c-ring-data-devices\") pod \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\" (UID: \"b06c8069-1e16-4d5f-82ae-8094e4d2644c\") " Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.055812 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06c8069-1e16-4d5f-82ae-8094e4d2644c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b06c8069-1e16-4d5f-82ae-8094e4d2644c" (UID: "b06c8069-1e16-4d5f-82ae-8094e4d2644c"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.056639 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b06c8069-1e16-4d5f-82ae-8094e4d2644c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b06c8069-1e16-4d5f-82ae-8094e4d2644c" (UID: "b06c8069-1e16-4d5f-82ae-8094e4d2644c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.067678 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b06c8069-1e16-4d5f-82ae-8094e4d2644c-kube-api-access-mmcms" (OuterVolumeSpecName: "kube-api-access-mmcms") pod "b06c8069-1e16-4d5f-82ae-8094e4d2644c" (UID: "b06c8069-1e16-4d5f-82ae-8094e4d2644c"). InnerVolumeSpecName "kube-api-access-mmcms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.092776 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06c8069-1e16-4d5f-82ae-8094e4d2644c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b06c8069-1e16-4d5f-82ae-8094e4d2644c" (UID: "b06c8069-1e16-4d5f-82ae-8094e4d2644c"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.099986 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b06c8069-1e16-4d5f-82ae-8094e4d2644c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b06c8069-1e16-4d5f-82ae-8094e4d2644c" (UID: "b06c8069-1e16-4d5f-82ae-8094e4d2644c"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.103086 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06c8069-1e16-4d5f-82ae-8094e4d2644c-scripts" (OuterVolumeSpecName: "scripts") pod "b06c8069-1e16-4d5f-82ae-8094e4d2644c" (UID: "b06c8069-1e16-4d5f-82ae-8094e4d2644c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.156830 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b06c8069-1e16-4d5f-82ae-8094e4d2644c-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.156857 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b06c8069-1e16-4d5f-82ae-8094e4d2644c-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.156871 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b06c8069-1e16-4d5f-82ae-8094e4d2644c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.156884 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmcms\" (UniqueName: \"kubernetes.io/projected/b06c8069-1e16-4d5f-82ae-8094e4d2644c-kube-api-access-mmcms\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.156897 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b06c8069-1e16-4d5f-82ae-8094e4d2644c-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.156907 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/b06c8069-1e16-4d5f-82ae-8094e4d2644c-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.568544 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2146638ba6e596869446b57a22081c314ca5c6779ac4eadfba1cf836ff77b28" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.568577 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-rjg9m" Jan 10 07:06:13 crc kubenswrapper[4810]: I0110 07:06:13.701119 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b06c8069-1e16-4d5f-82ae-8094e4d2644c" path="/var/lib/kubelet/pods/b06c8069-1e16-4d5f-82ae-8094e4d2644c/volumes" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.429861 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx"] Jan 10 07:06:14 crc kubenswrapper[4810]: E0110 07:06:14.432725 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06c8069-1e16-4d5f-82ae-8094e4d2644c" containerName="swift-ring-rebalance" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.432892 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06c8069-1e16-4d5f-82ae-8094e4d2644c" containerName="swift-ring-rebalance" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.435059 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06c8069-1e16-4d5f-82ae-8094e4d2644c" containerName="swift-ring-rebalance" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.435984 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.439988 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.441333 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.453577 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx"] Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.578356 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-scripts\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.578395 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsbv5\" (UniqueName: \"kubernetes.io/projected/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-kube-api-access-fsbv5\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.578419 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-etc-swift\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.578438 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-swiftconf\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.578575 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-dispersionconf\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.578651 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-ring-data-devices\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.679883 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-scripts\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.679928 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsbv5\" (UniqueName: \"kubernetes.io/projected/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-kube-api-access-fsbv5\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc 
kubenswrapper[4810]: I0110 07:06:14.679951 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-etc-swift\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.679971 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-swiftconf\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.680006 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-dispersionconf\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.680036 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-ring-data-devices\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.680703 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-ring-data-devices\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 
crc kubenswrapper[4810]: I0110 07:06:14.681223 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-scripts\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.681697 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-etc-swift\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.689072 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-dispersionconf\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.699964 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-swiftconf\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 07:06:14.700372 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsbv5\" (UniqueName: \"kubernetes.io/projected/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-kube-api-access-fsbv5\") pod \"swift-ring-rebalance-debug-hsdwx\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:14 crc kubenswrapper[4810]: I0110 
07:06:14.760314 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:15 crc kubenswrapper[4810]: I0110 07:06:15.039993 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx"] Jan 10 07:06:15 crc kubenswrapper[4810]: W0110 07:06:15.050473 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d2acfc7_4b6a_4cbf_8cf9_aa65b7449497.slice/crio-b8b5fa135596a9d32d1097fbb73b3dd3139a4c74a8f4a9e8568b6bd55f467779 WatchSource:0}: Error finding container b8b5fa135596a9d32d1097fbb73b3dd3139a4c74a8f4a9e8568b6bd55f467779: Status 404 returned error can't find the container with id b8b5fa135596a9d32d1097fbb73b3dd3139a4c74a8f4a9e8568b6bd55f467779 Jan 10 07:06:15 crc kubenswrapper[4810]: I0110 07:06:15.584455 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" event={"ID":"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497","Type":"ContainerStarted","Data":"f251f934da984a83daa95f1d7ebf48df29f34a0091743c4bd141c24b21008a3e"} Jan 10 07:06:15 crc kubenswrapper[4810]: I0110 07:06:15.584741 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" event={"ID":"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497","Type":"ContainerStarted","Data":"b8b5fa135596a9d32d1097fbb73b3dd3139a4c74a8f4a9e8568b6bd55f467779"} Jan 10 07:06:15 crc kubenswrapper[4810]: I0110 07:06:15.609726 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" podStartSLOduration=1.609702581 podStartE2EDuration="1.609702581s" podCreationTimestamp="2026-01-10 07:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:06:15.605660424 +0000 UTC 
m=+1204.221153317" watchObservedRunningTime="2026-01-10 07:06:15.609702581 +0000 UTC m=+1204.225195484" Jan 10 07:06:17 crc kubenswrapper[4810]: I0110 07:06:17.604093 4810 generic.go:334] "Generic (PLEG): container finished" podID="6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497" containerID="f251f934da984a83daa95f1d7ebf48df29f34a0091743c4bd141c24b21008a3e" exitCode=0 Jan 10 07:06:17 crc kubenswrapper[4810]: I0110 07:06:17.604133 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" event={"ID":"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497","Type":"ContainerDied","Data":"f251f934da984a83daa95f1d7ebf48df29f34a0091743c4bd141c24b21008a3e"} Jan 10 07:06:18 crc kubenswrapper[4810]: I0110 07:06:18.925122 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:18 crc kubenswrapper[4810]: I0110 07:06:18.954980 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx"] Jan 10 07:06:18 crc kubenswrapper[4810]: I0110 07:06:18.966000 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx"] Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.058689 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsbv5\" (UniqueName: \"kubernetes.io/projected/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-kube-api-access-fsbv5\") pod \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.058755 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-dispersionconf\") pod \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " Jan 10 07:06:19 crc kubenswrapper[4810]: 
I0110 07:06:19.058787 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-ring-data-devices\") pod \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.058834 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-swiftconf\") pod \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.058864 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-etc-swift\") pod \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.058900 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-scripts\") pod \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\" (UID: \"6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497\") " Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.060417 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497" (UID: "6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.062676 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497" (UID: "6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.069263 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.069965 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-server" containerID="cri-o://fe049bbd23ee68b3d9927ed8f5ba00b69a609bf1e0d7e52d8bc54ab506731b7a" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070338 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="swift-recon-cron" containerID="cri-o://8824c9a54a2c9217c99cdb159420c8131597b35d901c38d81c169a1941349229" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070385 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="rsync" containerID="cri-o://f958e6641185b495f45c8a6faf631ddeebcf22b1498068a61d1ca6dd437539e3" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070420 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-expirer" 
containerID="cri-o://7621375edfbec0a12c9033b1cd7a5898ef4065162ea91c44d0ad5d76343dcc36" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070452 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-updater" containerID="cri-o://064cf0033482b08b02beb362cc44504986ec69fa702709878b7dad655cae3a26" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070486 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-auditor" containerID="cri-o://23383170fe5100337b18f062e9a9b8cf955cc71256fd95623a318fef9f67313b" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070518 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-replicator" containerID="cri-o://bff692951863c7a90a2834a1bbae77bf426a3c39ba2bb08b8418474095658a09" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070550 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-server" containerID="cri-o://ac1e7fedff8ac508d556d8e7fe41d6c2484320ac090dbb7474a3704c31c76c5b" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070582 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-updater" containerID="cri-o://94459d7a51910321e0eb775de5dfa2e54714b78581becf8aa03039eed74b512e" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070612 4810 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-auditor" containerID="cri-o://21b6efddaa7b34fe603e3371ee72d2ba574cf0669eacc560723d3d342413a01e" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070642 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-replicator" containerID="cri-o://128fd76c5dd360b4b795efa5f41c530eb21ed7c39a5de3eeed5522f286136f21" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070674 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-server" containerID="cri-o://18f6d65cd6ed6853502e72ba6cab5c950ef00d118dee5b3215d114acd010284a" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070704 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-reaper" containerID="cri-o://3260563e2c3b704cd8b15705b0ee92267bfad2eaba88fa8596c9371b5f5a13cd" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070732 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-auditor" containerID="cri-o://4b6d6a8a032838dd1d8cd2fbd8e0daa09d6a922cb568f35445591a7fc8f3d1f2" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.070760 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-replicator" 
containerID="cri-o://95c486fe8d34f8e3a1abfdfa11b5486e5df5de527c03cb589876f783d2808f62" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.074592 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-kube-api-access-fsbv5" (OuterVolumeSpecName: "kube-api-access-fsbv5") pod "6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497" (UID: "6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497"). InnerVolumeSpecName "kube-api-access-fsbv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.092432 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.093025 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-server" containerID="cri-o://372c8eaafcd6ce6c9f2c2f62c996f9a11292af7ab244908e51f3b20940be2740" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.093171 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="swift-recon-cron" containerID="cri-o://93216bde1118222264a1b0bf8b8454e979d8908ed225e502b73a864ed22746b4" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.093256 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="rsync" containerID="cri-o://9de95a21f8dbae911835b04d6bb9c08411a63ed592f2290198c531e6555ee3a4" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.093317 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" 
podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-expirer" containerID="cri-o://1eba0e6ff51cf1a4638dad271bbeab4fcb42ddcf9abb1bbe26bdf3478bd4349f" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.093461 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-replicator" containerID="cri-o://10e517c56d4b6631859abbe184e61691ded46d983bb0a652e8316d88f670d7d9" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.093533 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-server" containerID="cri-o://0e4bb6a84a9dc7b325f43e70051e92694630534e6d3ee7346b870fc88190ee19" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.093580 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-reaper" containerID="cri-o://2a5cdbb179fb7e964bf55899f1b0a21819dbbdc619361dd13ad3f3864dd147d4" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.093632 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-auditor" containerID="cri-o://aff81c52ba040f2c55fcc06f5348292a947d3ea6768b22daae2544a468781d09" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.093685 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-replicator" containerID="cri-o://a3c4a778920bdcfc162059546a353e135dcd2ea1b6a0a0884e4abaef4653d6aa" gracePeriod=30 Jan 10 07:06:19 crc 
kubenswrapper[4810]: I0110 07:06:19.093846 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-replicator" containerID="cri-o://106211d527b2e9f985ff2fd4b65dc5897b4c0b169e12af3b66cba966a1606788" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.093918 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-updater" containerID="cri-o://a8d6bbaeb07ad066ddfa58dcb1ad26125f59b9eca7ab00c23cf236cb96004a9c" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.093974 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-auditor" containerID="cri-o://9bb4d63c1b3daa36e3a2ec65a2cedc4f5253601994fa95f8be2ee3c3bfe49778" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.094423 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-server" containerID="cri-o://dc920b856f0efd9bb7295730c5e1222e068efc741c239d2d5ddccf0cb1fe5adf" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.094594 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-updater" containerID="cri-o://eb8dcfa46566649bbb45366161cf2b4382878603181a69899d8b9f5d0229dea7" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.093384 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" 
containerName="container-auditor" containerID="cri-o://db4d766a67288250d2e6479d3f814524d13cf0c6aabc004fd92a733a99ec536d" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.123355 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497" (UID: "6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.140327 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497" (UID: "6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.146416 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.146948 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-server" containerID="cri-o://8258765d91bdb5e3f54d64e10aec0b6b1cfe07a3ce580829e9a678bb13a19aed" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147365 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="swift-recon-cron" containerID="cri-o://ede53848c5d2a69668ba69d9910c04ea953b29d099a12afd9481731d122fb7f0" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147418 4810 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="rsync" containerID="cri-o://3df46909d6f39ae4d531e3640f55760d95af0e92dbf05a6001c7544567f31b3a" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147452 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-expirer" containerID="cri-o://9b34af72bebe0899e2b088115878ab15e8816b704d05aa4e118dc0d46b829daa" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147487 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-updater" containerID="cri-o://326d29fd333a4f5fe159240b83a716cdb19340dd6eaf298128b2f9c09a746685" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147521 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-auditor" containerID="cri-o://a86527ae03fad35b0b66ec650e019532b24a40310c0ad4f3fe8d3189fd355a6b" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147550 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-replicator" containerID="cri-o://9f9fa47413eb8028943dac1ad3ba27db91e6cc8ebe9b05d4e80c48715a616d35" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147579 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-server" containerID="cri-o://9512c7cbb62d9a8534478dd52e144a475c693b2308ca31b270d11cf60206c8b4" 
gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147609 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-updater" containerID="cri-o://9e44de55965700aa2a1738d19841eec121311ffe479ed52bbdfd1f5cb3f32d12" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147641 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-auditor" containerID="cri-o://9908ff9d26c56f81fb159d30e8893c01cecc049b1550089c99dc2b09e3b2d877" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147670 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-replicator" containerID="cri-o://1346be94515cec4d9a1588a90a2a5a36e28d40d4541180769eb86196e569034c" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147699 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-server" containerID="cri-o://8281cbc2a9e9c4f5e67f9e22afd89953cb5e28bb6166d94ca45efc00006596e3" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147739 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-reaper" containerID="cri-o://010edfc64ef9a279855a87d8dad7fdf00b6587754f598c38885a8d0459a3db1a" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147774 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" 
podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-auditor" containerID="cri-o://3d8c21866fabab0d70e8e1d35b7b08821112e6fa634bb2c1b0f701b2b94119f8" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.147804 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-replicator" containerID="cri-o://8761d3461a532827e0ed249aaa7c00b8567a3d711e8a1c6092d396f95c336467" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.158441 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-mtptn"] Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.161015 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.161050 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsbv5\" (UniqueName: \"kubernetes.io/projected/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-kube-api-access-fsbv5\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.161063 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.161075 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.161086 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.164288 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-mtptn"] Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.168988 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-scripts" (OuterVolumeSpecName: "scripts") pod "6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497" (UID: "6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.169779 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"] Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.170118 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" podUID="e84852f5-9196-4304-8c33-83cfa9bfc818" containerName="proxy-httpd" containerID="cri-o://7c649a9a8f1a228c049b3b16c52f99158178222b09a0741609ec598696249343" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.170258 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" podUID="e84852f5-9196-4304-8c33-83cfa9bfc818" containerName="proxy-server" containerID="cri-o://d76686c76dc6556956e92752544c2b1c74a57b7fc40cb28a5887345655c845b6" gracePeriod=30 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.262449 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.629070 4810 generic.go:334] "Generic (PLEG): container finished" 
podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="9de95a21f8dbae911835b04d6bb9c08411a63ed592f2290198c531e6555ee3a4" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.629307 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="1eba0e6ff51cf1a4638dad271bbeab4fcb42ddcf9abb1bbe26bdf3478bd4349f" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.629389 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="a8d6bbaeb07ad066ddfa58dcb1ad26125f59b9eca7ab00c23cf236cb96004a9c" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.629501 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="9bb4d63c1b3daa36e3a2ec65a2cedc4f5253601994fa95f8be2ee3c3bfe49778" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.629589 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="106211d527b2e9f985ff2fd4b65dc5897b4c0b169e12af3b66cba966a1606788" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.629687 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="dc920b856f0efd9bb7295730c5e1222e068efc741c239d2d5ddccf0cb1fe5adf" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.629759 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="eb8dcfa46566649bbb45366161cf2b4382878603181a69899d8b9f5d0229dea7" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.629838 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="db4d766a67288250d2e6479d3f814524d13cf0c6aabc004fd92a733a99ec536d" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.629167 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"9de95a21f8dbae911835b04d6bb9c08411a63ed592f2290198c531e6555ee3a4"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.629892 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="10e517c56d4b6631859abbe184e61691ded46d983bb0a652e8316d88f670d7d9" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.630025 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="0e4bb6a84a9dc7b325f43e70051e92694630534e6d3ee7346b870fc88190ee19" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.630142 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="2a5cdbb179fb7e964bf55899f1b0a21819dbbdc619361dd13ad3f3864dd147d4" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.630220 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="aff81c52ba040f2c55fcc06f5348292a947d3ea6768b22daae2544a468781d09" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.630308 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="a3c4a778920bdcfc162059546a353e135dcd2ea1b6a0a0884e4abaef4653d6aa" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.630391 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="372c8eaafcd6ce6c9f2c2f62c996f9a11292af7ab244908e51f3b20940be2740" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.629954 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"1eba0e6ff51cf1a4638dad271bbeab4fcb42ddcf9abb1bbe26bdf3478bd4349f"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.630652 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"a8d6bbaeb07ad066ddfa58dcb1ad26125f59b9eca7ab00c23cf236cb96004a9c"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.630738 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"9bb4d63c1b3daa36e3a2ec65a2cedc4f5253601994fa95f8be2ee3c3bfe49778"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.630825 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"106211d527b2e9f985ff2fd4b65dc5897b4c0b169e12af3b66cba966a1606788"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.630910 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"dc920b856f0efd9bb7295730c5e1222e068efc741c239d2d5ddccf0cb1fe5adf"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.630991 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"eb8dcfa46566649bbb45366161cf2b4382878603181a69899d8b9f5d0229dea7"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.631072 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"db4d766a67288250d2e6479d3f814524d13cf0c6aabc004fd92a733a99ec536d"} Jan 10 
07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.631153 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"10e517c56d4b6631859abbe184e61691ded46d983bb0a652e8316d88f670d7d9"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.631239 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"0e4bb6a84a9dc7b325f43e70051e92694630534e6d3ee7346b870fc88190ee19"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.631331 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"2a5cdbb179fb7e964bf55899f1b0a21819dbbdc619361dd13ad3f3864dd147d4"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.631420 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"aff81c52ba040f2c55fcc06f5348292a947d3ea6768b22daae2544a468781d09"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.631524 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"a3c4a778920bdcfc162059546a353e135dcd2ea1b6a0a0884e4abaef4653d6aa"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.631622 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"372c8eaafcd6ce6c9f2c2f62c996f9a11292af7ab244908e51f3b20940be2740"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.632535 4810 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b8b5fa135596a9d32d1097fbb73b3dd3139a4c74a8f4a9e8568b6bd55f467779" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.632666 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-hsdwx" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638329 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerID="f958e6641185b495f45c8a6faf631ddeebcf22b1498068a61d1ca6dd437539e3" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638365 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerID="7621375edfbec0a12c9033b1cd7a5898ef4065162ea91c44d0ad5d76343dcc36" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638374 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerID="064cf0033482b08b02beb362cc44504986ec69fa702709878b7dad655cae3a26" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638387 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerID="23383170fe5100337b18f062e9a9b8cf955cc71256fd95623a318fef9f67313b" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638396 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerID="bff692951863c7a90a2834a1bbae77bf426a3c39ba2bb08b8418474095658a09" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638403 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerID="94459d7a51910321e0eb775de5dfa2e54714b78581becf8aa03039eed74b512e" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638411 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" 
containerID="21b6efddaa7b34fe603e3371ee72d2ba574cf0669eacc560723d3d342413a01e" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638405 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"f958e6641185b495f45c8a6faf631ddeebcf22b1498068a61d1ca6dd437539e3"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638451 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"7621375edfbec0a12c9033b1cd7a5898ef4065162ea91c44d0ad5d76343dcc36"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638466 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"064cf0033482b08b02beb362cc44504986ec69fa702709878b7dad655cae3a26"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638478 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"23383170fe5100337b18f062e9a9b8cf955cc71256fd95623a318fef9f67313b"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638489 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"bff692951863c7a90a2834a1bbae77bf426a3c39ba2bb08b8418474095658a09"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638500 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"94459d7a51910321e0eb775de5dfa2e54714b78581becf8aa03039eed74b512e"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638510 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"21b6efddaa7b34fe603e3371ee72d2ba574cf0669eacc560723d3d342413a01e"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638519 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"128fd76c5dd360b4b795efa5f41c530eb21ed7c39a5de3eeed5522f286136f21"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638420 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerID="128fd76c5dd360b4b795efa5f41c530eb21ed7c39a5de3eeed5522f286136f21" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638539 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerID="18f6d65cd6ed6853502e72ba6cab5c950ef00d118dee5b3215d114acd010284a" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638552 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerID="3260563e2c3b704cd8b15705b0ee92267bfad2eaba88fa8596c9371b5f5a13cd" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638561 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerID="4b6d6a8a032838dd1d8cd2fbd8e0daa09d6a922cb568f35445591a7fc8f3d1f2" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638570 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerID="95c486fe8d34f8e3a1abfdfa11b5486e5df5de527c03cb589876f783d2808f62" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638579 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" 
containerID="fe049bbd23ee68b3d9927ed8f5ba00b69a609bf1e0d7e52d8bc54ab506731b7a" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638626 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"18f6d65cd6ed6853502e72ba6cab5c950ef00d118dee5b3215d114acd010284a"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638643 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"3260563e2c3b704cd8b15705b0ee92267bfad2eaba88fa8596c9371b5f5a13cd"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638671 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"4b6d6a8a032838dd1d8cd2fbd8e0daa09d6a922cb568f35445591a7fc8f3d1f2"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638681 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"95c486fe8d34f8e3a1abfdfa11b5486e5df5de527c03cb589876f783d2808f62"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.638690 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"fe049bbd23ee68b3d9927ed8f5ba00b69a609bf1e0d7e52d8bc54ab506731b7a"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.640863 4810 generic.go:334] "Generic (PLEG): container finished" podID="e84852f5-9196-4304-8c33-83cfa9bfc818" containerID="7c649a9a8f1a228c049b3b16c52f99158178222b09a0741609ec598696249343" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.640926 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" event={"ID":"e84852f5-9196-4304-8c33-83cfa9bfc818","Type":"ContainerDied","Data":"7c649a9a8f1a228c049b3b16c52f99158178222b09a0741609ec598696249343"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.651356 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="9b34af72bebe0899e2b088115878ab15e8816b704d05aa4e118dc0d46b829daa" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.651472 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="326d29fd333a4f5fe159240b83a716cdb19340dd6eaf298128b2f9c09a746685" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.651537 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="a86527ae03fad35b0b66ec650e019532b24a40310c0ad4f3fe8d3189fd355a6b" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.651593 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="9f9fa47413eb8028943dac1ad3ba27db91e6cc8ebe9b05d4e80c48715a616d35" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.651650 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="9512c7cbb62d9a8534478dd52e144a475c693b2308ca31b270d11cf60206c8b4" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.651706 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="9e44de55965700aa2a1738d19841eec121311ffe479ed52bbdfd1f5cb3f32d12" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.651760 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="9908ff9d26c56f81fb159d30e8893c01cecc049b1550089c99dc2b09e3b2d877" exitCode=0 Jan 
10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.651830 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="1346be94515cec4d9a1588a90a2a5a36e28d40d4541180769eb86196e569034c" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.651903 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="010edfc64ef9a279855a87d8dad7fdf00b6587754f598c38885a8d0459a3db1a" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.651967 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="3d8c21866fabab0d70e8e1d35b7b08821112e6fa634bb2c1b0f701b2b94119f8" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.652035 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="8761d3461a532827e0ed249aaa7c00b8567a3d711e8a1c6092d396f95c336467" exitCode=0 Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.652123 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"9b34af72bebe0899e2b088115878ab15e8816b704d05aa4e118dc0d46b829daa"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.652245 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"326d29fd333a4f5fe159240b83a716cdb19340dd6eaf298128b2f9c09a746685"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.652343 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"a86527ae03fad35b0b66ec650e019532b24a40310c0ad4f3fe8d3189fd355a6b"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 
07:06:19.652422 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"9f9fa47413eb8028943dac1ad3ba27db91e6cc8ebe9b05d4e80c48715a616d35"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.652498 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"9512c7cbb62d9a8534478dd52e144a475c693b2308ca31b270d11cf60206c8b4"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.652580 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"9e44de55965700aa2a1738d19841eec121311ffe479ed52bbdfd1f5cb3f32d12"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.652664 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"9908ff9d26c56f81fb159d30e8893c01cecc049b1550089c99dc2b09e3b2d877"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.652916 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"1346be94515cec4d9a1588a90a2a5a36e28d40d4541180769eb86196e569034c"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.652987 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"010edfc64ef9a279855a87d8dad7fdf00b6587754f598c38885a8d0459a3db1a"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.653066 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"3d8c21866fabab0d70e8e1d35b7b08821112e6fa634bb2c1b0f701b2b94119f8"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.653146 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"8761d3461a532827e0ed249aaa7c00b8567a3d711e8a1c6092d396f95c336467"} Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.701003 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497" path="/var/lib/kubelet/pods/6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497/volumes" Jan 10 07:06:19 crc kubenswrapper[4810]: I0110 07:06:19.701682 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1939e73-ee85-4927-a954-7ca45c80767a" path="/var/lib/kubelet/pods/d1939e73-ee85-4927-a954-7ca45c80767a/volumes" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.111723 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.173326 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift\") pod \"e84852f5-9196-4304-8c33-83cfa9bfc818\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.173389 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e84852f5-9196-4304-8c33-83cfa9bfc818-log-httpd\") pod \"e84852f5-9196-4304-8c33-83cfa9bfc818\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.173433 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84852f5-9196-4304-8c33-83cfa9bfc818-config-data\") pod \"e84852f5-9196-4304-8c33-83cfa9bfc818\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.173482 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2r44\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-kube-api-access-l2r44\") pod \"e84852f5-9196-4304-8c33-83cfa9bfc818\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.173529 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e84852f5-9196-4304-8c33-83cfa9bfc818-run-httpd\") pod \"e84852f5-9196-4304-8c33-83cfa9bfc818\" (UID: \"e84852f5-9196-4304-8c33-83cfa9bfc818\") " Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.173952 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e84852f5-9196-4304-8c33-83cfa9bfc818-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e84852f5-9196-4304-8c33-83cfa9bfc818" (UID: "e84852f5-9196-4304-8c33-83cfa9bfc818"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.174142 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e84852f5-9196-4304-8c33-83cfa9bfc818-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e84852f5-9196-4304-8c33-83cfa9bfc818" (UID: "e84852f5-9196-4304-8c33-83cfa9bfc818"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.178971 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-kube-api-access-l2r44" (OuterVolumeSpecName: "kube-api-access-l2r44") pod "e84852f5-9196-4304-8c33-83cfa9bfc818" (UID: "e84852f5-9196-4304-8c33-83cfa9bfc818"). InnerVolumeSpecName "kube-api-access-l2r44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.199370 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e84852f5-9196-4304-8c33-83cfa9bfc818" (UID: "e84852f5-9196-4304-8c33-83cfa9bfc818"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.219225 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e84852f5-9196-4304-8c33-83cfa9bfc818-config-data" (OuterVolumeSpecName: "config-data") pod "e84852f5-9196-4304-8c33-83cfa9bfc818" (UID: "e84852f5-9196-4304-8c33-83cfa9bfc818"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.274728 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.274758 4810 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e84852f5-9196-4304-8c33-83cfa9bfc818-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.274767 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e84852f5-9196-4304-8c33-83cfa9bfc818-config-data\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.274775 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2r44\" (UniqueName: \"kubernetes.io/projected/e84852f5-9196-4304-8c33-83cfa9bfc818-kube-api-access-l2r44\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.274784 4810 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e84852f5-9196-4304-8c33-83cfa9bfc818-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.662670 4810 generic.go:334] "Generic (PLEG): container finished" podID="e84852f5-9196-4304-8c33-83cfa9bfc818" containerID="d76686c76dc6556956e92752544c2b1c74a57b7fc40cb28a5887345655c845b6" exitCode=0 Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.662721 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.662770 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" event={"ID":"e84852f5-9196-4304-8c33-83cfa9bfc818","Type":"ContainerDied","Data":"d76686c76dc6556956e92752544c2b1c74a57b7fc40cb28a5887345655c845b6"} Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.662806 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq" event={"ID":"e84852f5-9196-4304-8c33-83cfa9bfc818","Type":"ContainerDied","Data":"a310c23f2a8fd33365ee278b195092f4618486f04b218651ad91876b48c79e63"} Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.662830 4810 scope.go:117] "RemoveContainer" containerID="d76686c76dc6556956e92752544c2b1c74a57b7fc40cb28a5887345655c845b6" Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.673994 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="3df46909d6f39ae4d531e3640f55760d95af0e92dbf05a6001c7544567f31b3a" exitCode=0 Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.674294 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="8281cbc2a9e9c4f5e67f9e22afd89953cb5e28bb6166d94ca45efc00006596e3" exitCode=0 Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.674438 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="8258765d91bdb5e3f54d64e10aec0b6b1cfe07a3ce580829e9a678bb13a19aed" exitCode=0 Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.674103 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"3df46909d6f39ae4d531e3640f55760d95af0e92dbf05a6001c7544567f31b3a"} Jan 10 07:06:20 crc 
kubenswrapper[4810]: I0110 07:06:20.674696 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"8281cbc2a9e9c4f5e67f9e22afd89953cb5e28bb6166d94ca45efc00006596e3"}
Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.674787 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"8258765d91bdb5e3f54d64e10aec0b6b1cfe07a3ce580829e9a678bb13a19aed"}
Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.686840 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerID="ac1e7fedff8ac508d556d8e7fe41d6c2484320ac090dbb7474a3704c31c76c5b" exitCode=0
Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.686874 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"ac1e7fedff8ac508d556d8e7fe41d6c2484320ac090dbb7474a3704c31c76c5b"}
Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.700180 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"]
Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.707495 4810 scope.go:117] "RemoveContainer" containerID="7c649a9a8f1a228c049b3b16c52f99158178222b09a0741609ec598696249343"
Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.711088 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-67f6cc5479-mj6nq"]
Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.747680 4810 scope.go:117] "RemoveContainer" containerID="d76686c76dc6556956e92752544c2b1c74a57b7fc40cb28a5887345655c845b6"
Jan 10 07:06:20 crc kubenswrapper[4810]: E0110 07:06:20.748528 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d76686c76dc6556956e92752544c2b1c74a57b7fc40cb28a5887345655c845b6\": container with ID starting with d76686c76dc6556956e92752544c2b1c74a57b7fc40cb28a5887345655c845b6 not found: ID does not exist" containerID="d76686c76dc6556956e92752544c2b1c74a57b7fc40cb28a5887345655c845b6"
Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.748567 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d76686c76dc6556956e92752544c2b1c74a57b7fc40cb28a5887345655c845b6"} err="failed to get container status \"d76686c76dc6556956e92752544c2b1c74a57b7fc40cb28a5887345655c845b6\": rpc error: code = NotFound desc = could not find container \"d76686c76dc6556956e92752544c2b1c74a57b7fc40cb28a5887345655c845b6\": container with ID starting with d76686c76dc6556956e92752544c2b1c74a57b7fc40cb28a5887345655c845b6 not found: ID does not exist"
Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.748593 4810 scope.go:117] "RemoveContainer" containerID="7c649a9a8f1a228c049b3b16c52f99158178222b09a0741609ec598696249343"
Jan 10 07:06:20 crc kubenswrapper[4810]: E0110 07:06:20.748971 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c649a9a8f1a228c049b3b16c52f99158178222b09a0741609ec598696249343\": container with ID starting with 7c649a9a8f1a228c049b3b16c52f99158178222b09a0741609ec598696249343 not found: ID does not exist" containerID="7c649a9a8f1a228c049b3b16c52f99158178222b09a0741609ec598696249343"
Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.748989 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c649a9a8f1a228c049b3b16c52f99158178222b09a0741609ec598696249343"} err="failed to get container status \"7c649a9a8f1a228c049b3b16c52f99158178222b09a0741609ec598696249343\": rpc error: code = NotFound desc = could not find container \"7c649a9a8f1a228c049b3b16c52f99158178222b09a0741609ec598696249343\": container with ID starting with 7c649a9a8f1a228c049b3b16c52f99158178222b09a0741609ec598696249343 not found: ID does not exist"
Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.882968 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 07:06:20 crc kubenswrapper[4810]: I0110 07:06:20.883039 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 07:06:21 crc kubenswrapper[4810]: I0110 07:06:21.705285 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e84852f5-9196-4304-8c33-83cfa9bfc818" path="/var/lib/kubelet/pods/e84852f5-9196-4304-8c33-83cfa9bfc818/volumes"
Jan 10 07:06:49 crc kubenswrapper[4810]: I0110 07:06:49.965611 4810 generic.go:334] "Generic (PLEG): container finished" podID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerID="ede53848c5d2a69668ba69d9910c04ea953b29d099a12afd9481731d122fb7f0" exitCode=137
Jan 10 07:06:49 crc kubenswrapper[4810]: I0110 07:06:49.965712 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"ede53848c5d2a69668ba69d9910c04ea953b29d099a12afd9481731d122fb7f0"}
Jan 10 07:06:49 crc kubenswrapper[4810]: I0110 07:06:49.985121 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerID="93216bde1118222264a1b0bf8b8454e979d8908ed225e502b73a864ed22746b4" exitCode=137
Jan 10 07:06:49 crc kubenswrapper[4810]: I0110 07:06:49.985222 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"93216bde1118222264a1b0bf8b8454e979d8908ed225e502b73a864ed22746b4"}
Jan 10 07:06:49 crc kubenswrapper[4810]: I0110 07:06:49.994624 4810 generic.go:334] "Generic (PLEG): container finished" podID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerID="8824c9a54a2c9217c99cdb159420c8131597b35d901c38d81c169a1941349229" exitCode=137
Jan 10 07:06:49 crc kubenswrapper[4810]: I0110 07:06:49.994689 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"8824c9a54a2c9217c99cdb159420c8131597b35d901c38d81c169a1941349229"}
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.098782 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.101596 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.120679 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.181921 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmqj9\" (UniqueName: \"kubernetes.io/projected/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-kube-api-access-rmqj9\") pod \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.181967 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-etc-swift\") pod \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.182027 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2726r\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-kube-api-access-2726r\") pod \"a75358c5-e04b-409e-8ba5-c5e184d656b1\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.182059 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e59b90ce-9a68-4262-8eca-90acf601a7fe-etc-swift\") pod \"e59b90ce-9a68-4262-8eca-90acf601a7fe\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.182088 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"e59b90ce-9a68-4262-8eca-90acf601a7fe\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.182114 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a75358c5-e04b-409e-8ba5-c5e184d656b1-cache\") pod \"a75358c5-e04b-409e-8ba5-c5e184d656b1\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.182135 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a75358c5-e04b-409e-8ba5-c5e184d656b1-lock\") pod \"a75358c5-e04b-409e-8ba5-c5e184d656b1\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.182158 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"a75358c5-e04b-409e-8ba5-c5e184d656b1\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.182183 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcwjj\" (UniqueName: \"kubernetes.io/projected/e59b90ce-9a68-4262-8eca-90acf601a7fe-kube-api-access-vcwjj\") pod \"e59b90ce-9a68-4262-8eca-90acf601a7fe\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.182224 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.182252 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-cache\") pod \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.182284 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift\") pod \"a75358c5-e04b-409e-8ba5-c5e184d656b1\" (UID: \"a75358c5-e04b-409e-8ba5-c5e184d656b1\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.182331 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e59b90ce-9a68-4262-8eca-90acf601a7fe-lock\") pod \"e59b90ce-9a68-4262-8eca-90acf601a7fe\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.182376 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-lock\") pod \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\" (UID: \"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.182400 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e59b90ce-9a68-4262-8eca-90acf601a7fe-cache\") pod \"e59b90ce-9a68-4262-8eca-90acf601a7fe\" (UID: \"e59b90ce-9a68-4262-8eca-90acf601a7fe\") "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.183115 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e59b90ce-9a68-4262-8eca-90acf601a7fe-cache" (OuterVolumeSpecName: "cache") pod "e59b90ce-9a68-4262-8eca-90acf601a7fe" (UID: "e59b90ce-9a68-4262-8eca-90acf601a7fe"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.183960 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a75358c5-e04b-409e-8ba5-c5e184d656b1-cache" (OuterVolumeSpecName: "cache") pod "a75358c5-e04b-409e-8ba5-c5e184d656b1" (UID: "a75358c5-e04b-409e-8ba5-c5e184d656b1"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.185909 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a75358c5-e04b-409e-8ba5-c5e184d656b1-lock" (OuterVolumeSpecName: "lock") pod "a75358c5-e04b-409e-8ba5-c5e184d656b1" (UID: "a75358c5-e04b-409e-8ba5-c5e184d656b1"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.185942 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e59b90ce-9a68-4262-8eca-90acf601a7fe-lock" (OuterVolumeSpecName: "lock") pod "e59b90ce-9a68-4262-8eca-90acf601a7fe" (UID: "e59b90ce-9a68-4262-8eca-90acf601a7fe"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.185945 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-cache" (OuterVolumeSpecName: "cache") pod "9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" (UID: "9e3bd11f-41c4-4d7e-a4db-feb15bf853b7"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.186316 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-lock" (OuterVolumeSpecName: "lock") pod "9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" (UID: "9e3bd11f-41c4-4d7e-a4db-feb15bf853b7"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.188570 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "swift") pod "e59b90ce-9a68-4262-8eca-90acf601a7fe" (UID: "e59b90ce-9a68-4262-8eca-90acf601a7fe"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.198672 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" (UID: "9e3bd11f-41c4-4d7e-a4db-feb15bf853b7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.198670 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59b90ce-9a68-4262-8eca-90acf601a7fe-kube-api-access-vcwjj" (OuterVolumeSpecName: "kube-api-access-vcwjj") pod "e59b90ce-9a68-4262-8eca-90acf601a7fe" (UID: "e59b90ce-9a68-4262-8eca-90acf601a7fe"). InnerVolumeSpecName "kube-api-access-vcwjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.198775 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "swift") pod "9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" (UID: "9e3bd11f-41c4-4d7e-a4db-feb15bf853b7"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.198857 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59b90ce-9a68-4262-8eca-90acf601a7fe-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "e59b90ce-9a68-4262-8eca-90acf601a7fe" (UID: "e59b90ce-9a68-4262-8eca-90acf601a7fe"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.198619 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-kube-api-access-rmqj9" (OuterVolumeSpecName: "kube-api-access-rmqj9") pod "9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" (UID: "9e3bd11f-41c4-4d7e-a4db-feb15bf853b7"). InnerVolumeSpecName "kube-api-access-rmqj9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.202137 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a75358c5-e04b-409e-8ba5-c5e184d656b1" (UID: "a75358c5-e04b-409e-8ba5-c5e184d656b1"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.204994 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-kube-api-access-2726r" (OuterVolumeSpecName: "kube-api-access-2726r") pod "a75358c5-e04b-409e-8ba5-c5e184d656b1" (UID: "a75358c5-e04b-409e-8ba5-c5e184d656b1"). InnerVolumeSpecName "kube-api-access-2726r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.205444 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "swift") pod "a75358c5-e04b-409e-8ba5-c5e184d656b1" (UID: "a75358c5-e04b-409e-8ba5-c5e184d656b1"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284694 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmqj9\" (UniqueName: \"kubernetes.io/projected/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-kube-api-access-rmqj9\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284735 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284749 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2726r\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-kube-api-access-2726r\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284761 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e59b90ce-9a68-4262-8eca-90acf601a7fe-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284800 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284812 4810 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a75358c5-e04b-409e-8ba5-c5e184d656b1-cache\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284824 4810 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a75358c5-e04b-409e-8ba5-c5e184d656b1-lock\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284841 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284853 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcwjj\" (UniqueName: \"kubernetes.io/projected/e59b90ce-9a68-4262-8eca-90acf601a7fe-kube-api-access-vcwjj\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284869 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284881 4810 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-cache\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284892 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a75358c5-e04b-409e-8ba5-c5e184d656b1-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284903 4810 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e59b90ce-9a68-4262-8eca-90acf601a7fe-lock\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284913 4810 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7-lock\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.284923 4810 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e59b90ce-9a68-4262-8eca-90acf601a7fe-cache\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.295760 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.299588 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.301411 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.386333 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.386361 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.386370 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.882630 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 07:06:50 crc kubenswrapper[4810]: I0110 07:06:50.882714 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.015009 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"9e3bd11f-41c4-4d7e-a4db-feb15bf853b7","Type":"ContainerDied","Data":"c45487bafb8409dfac207b9a98e6602d7fd6ad8a19d219c3eb7d40af0a97930f"}
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.015088 4810 scope.go:117] "RemoveContainer" containerID="93216bde1118222264a1b0bf8b8454e979d8908ed225e502b73a864ed22746b4"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.015164 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.028941 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.029447 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"a75358c5-e04b-409e-8ba5-c5e184d656b1","Type":"ContainerDied","Data":"095ba6a7faddbd38dd9ca9ab21a5e56da98a7dcd8fc91a24e92a74a7edcceacc"}
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.042040 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"e59b90ce-9a68-4262-8eca-90acf601a7fe","Type":"ContainerDied","Data":"2f65f2426935bea8f66a62c7b7887cce2cfaef46c7b6fbced559dd3201a6b14b"}
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.042226 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.046655 4810 scope.go:117] "RemoveContainer" containerID="9de95a21f8dbae911835b04d6bb9c08411a63ed592f2290198c531e6555ee3a4"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.073699 4810 scope.go:117] "RemoveContainer" containerID="1eba0e6ff51cf1a4638dad271bbeab4fcb42ddcf9abb1bbe26bdf3478bd4349f"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.079545 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.092231 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-1"]
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.102876 4810 scope.go:117] "RemoveContainer" containerID="a8d6bbaeb07ad066ddfa58dcb1ad26125f59b9eca7ab00c23cf236cb96004a9c"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.103009 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.112033 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.120463 4810 scope.go:117] "RemoveContainer" containerID="9bb4d63c1b3daa36e3a2ec65a2cedc4f5253601994fa95f8be2ee3c3bfe49778"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.122761 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.129938 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-2"]
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.142046 4810 scope.go:117] "RemoveContainer" containerID="106211d527b2e9f985ff2fd4b65dc5897b4c0b169e12af3b66cba966a1606788"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.155794 4810 scope.go:117] "RemoveContainer" containerID="dc920b856f0efd9bb7295730c5e1222e068efc741c239d2d5ddccf0cb1fe5adf"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.169312 4810 scope.go:117] "RemoveContainer" containerID="eb8dcfa46566649bbb45366161cf2b4382878603181a69899d8b9f5d0229dea7"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.186356 4810 scope.go:117] "RemoveContainer" containerID="db4d766a67288250d2e6479d3f814524d13cf0c6aabc004fd92a733a99ec536d"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.202901 4810 scope.go:117] "RemoveContainer" containerID="10e517c56d4b6631859abbe184e61691ded46d983bb0a652e8316d88f670d7d9"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.220698 4810 scope.go:117] "RemoveContainer" containerID="0e4bb6a84a9dc7b325f43e70051e92694630534e6d3ee7346b870fc88190ee19"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.239062 4810 scope.go:117] "RemoveContainer" containerID="2a5cdbb179fb7e964bf55899f1b0a21819dbbdc619361dd13ad3f3864dd147d4"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.256225 4810 scope.go:117] "RemoveContainer" containerID="aff81c52ba040f2c55fcc06f5348292a947d3ea6768b22daae2544a468781d09"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.269956 4810 scope.go:117] "RemoveContainer" containerID="a3c4a778920bdcfc162059546a353e135dcd2ea1b6a0a0884e4abaef4653d6aa"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.288035 4810 scope.go:117] "RemoveContainer" containerID="372c8eaafcd6ce6c9f2c2f62c996f9a11292af7ab244908e51f3b20940be2740"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.312033 4810 scope.go:117] "RemoveContainer" containerID="8824c9a54a2c9217c99cdb159420c8131597b35d901c38d81c169a1941349229"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.328553 4810 scope.go:117] "RemoveContainer" containerID="f958e6641185b495f45c8a6faf631ddeebcf22b1498068a61d1ca6dd437539e3"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.342045 4810 scope.go:117] "RemoveContainer" containerID="7621375edfbec0a12c9033b1cd7a5898ef4065162ea91c44d0ad5d76343dcc36"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.356132 4810 scope.go:117] "RemoveContainer" containerID="064cf0033482b08b02beb362cc44504986ec69fa702709878b7dad655cae3a26"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.376831 4810 scope.go:117] "RemoveContainer" containerID="23383170fe5100337b18f062e9a9b8cf955cc71256fd95623a318fef9f67313b"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.393909 4810 scope.go:117] "RemoveContainer" containerID="bff692951863c7a90a2834a1bbae77bf426a3c39ba2bb08b8418474095658a09"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.409843 4810 scope.go:117] "RemoveContainer" containerID="ac1e7fedff8ac508d556d8e7fe41d6c2484320ac090dbb7474a3704c31c76c5b"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.430999 4810 scope.go:117] "RemoveContainer" containerID="94459d7a51910321e0eb775de5dfa2e54714b78581becf8aa03039eed74b512e"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.447466 4810 scope.go:117] "RemoveContainer" containerID="21b6efddaa7b34fe603e3371ee72d2ba574cf0669eacc560723d3d342413a01e"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.478755 4810 scope.go:117] "RemoveContainer" containerID="128fd76c5dd360b4b795efa5f41c530eb21ed7c39a5de3eeed5522f286136f21"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.509813 4810 scope.go:117] "RemoveContainer" containerID="18f6d65cd6ed6853502e72ba6cab5c950ef00d118dee5b3215d114acd010284a"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.534364 4810 scope.go:117] "RemoveContainer" containerID="3260563e2c3b704cd8b15705b0ee92267bfad2eaba88fa8596c9371b5f5a13cd"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.559913 4810 scope.go:117] "RemoveContainer" containerID="4b6d6a8a032838dd1d8cd2fbd8e0daa09d6a922cb568f35445591a7fc8f3d1f2"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.584877 4810 scope.go:117] "RemoveContainer" containerID="95c486fe8d34f8e3a1abfdfa11b5486e5df5de527c03cb589876f783d2808f62"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.613667 4810 scope.go:117] "RemoveContainer" containerID="fe049bbd23ee68b3d9927ed8f5ba00b69a609bf1e0d7e52d8bc54ab506731b7a"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.639492 4810 scope.go:117] "RemoveContainer" containerID="ede53848c5d2a69668ba69d9910c04ea953b29d099a12afd9481731d122fb7f0"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.662257 4810 scope.go:117] "RemoveContainer" containerID="3df46909d6f39ae4d531e3640f55760d95af0e92dbf05a6001c7544567f31b3a"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.693098 4810 scope.go:117] "RemoveContainer" containerID="9b34af72bebe0899e2b088115878ab15e8816b704d05aa4e118dc0d46b829daa"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.702997 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" path="/var/lib/kubelet/pods/9e3bd11f-41c4-4d7e-a4db-feb15bf853b7/volumes"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.705861 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" path="/var/lib/kubelet/pods/a75358c5-e04b-409e-8ba5-c5e184d656b1/volumes"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.708178 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" path="/var/lib/kubelet/pods/e59b90ce-9a68-4262-8eca-90acf601a7fe/volumes"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.727166 4810 scope.go:117] "RemoveContainer" containerID="326d29fd333a4f5fe159240b83a716cdb19340dd6eaf298128b2f9c09a746685"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.753819 4810 scope.go:117] "RemoveContainer" containerID="a86527ae03fad35b0b66ec650e019532b24a40310c0ad4f3fe8d3189fd355a6b"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.778860 4810 scope.go:117] "RemoveContainer" containerID="9f9fa47413eb8028943dac1ad3ba27db91e6cc8ebe9b05d4e80c48715a616d35"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.805173 4810 scope.go:117] "RemoveContainer" containerID="9512c7cbb62d9a8534478dd52e144a475c693b2308ca31b270d11cf60206c8b4"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.833116 4810 scope.go:117] "RemoveContainer" containerID="9e44de55965700aa2a1738d19841eec121311ffe479ed52bbdfd1f5cb3f32d12"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.860454 4810 scope.go:117] "RemoveContainer" containerID="9908ff9d26c56f81fb159d30e8893c01cecc049b1550089c99dc2b09e3b2d877"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.885403 4810 scope.go:117] "RemoveContainer" containerID="1346be94515cec4d9a1588a90a2a5a36e28d40d4541180769eb86196e569034c"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.904351 4810 scope.go:117] "RemoveContainer" containerID="8281cbc2a9e9c4f5e67f9e22afd89953cb5e28bb6166d94ca45efc00006596e3"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.925718 4810 scope.go:117] "RemoveContainer" containerID="010edfc64ef9a279855a87d8dad7fdf00b6587754f598c38885a8d0459a3db1a"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.947003 4810 scope.go:117] "RemoveContainer" containerID="3d8c21866fabab0d70e8e1d35b7b08821112e6fa634bb2c1b0f701b2b94119f8"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.965832 4810 scope.go:117] "RemoveContainer" containerID="8761d3461a532827e0ed249aaa7c00b8567a3d711e8a1c6092d396f95c336467"
Jan 10 07:06:51 crc kubenswrapper[4810]: I0110 07:06:51.994548 4810 scope.go:117] "RemoveContainer" containerID="8258765d91bdb5e3f54d64e10aec0b6b1cfe07a3ce580829e9a678bb13a19aed"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.696298 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697018 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-replicator"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697040 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-replicator"
Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697056 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-server"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697068 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-server"
Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697083 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-updater"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697096 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-updater"
Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697109 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84852f5-9196-4304-8c33-83cfa9bfc818" containerName="proxy-httpd"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697121 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84852f5-9196-4304-8c33-83cfa9bfc818" containerName="proxy-httpd"
Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697135 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-server"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697146 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-server"
Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697171 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-expirer"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697183 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-expirer"
Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697222 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497" containerName="swift-ring-rebalance"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697234 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497" containerName="swift-ring-rebalance"
Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697253 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-auditor"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697265 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-auditor"
Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697287 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1"
containerName="rsync" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697298 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="rsync" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697315 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697328 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-server" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697343 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697355 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697375 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697388 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-server" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697410 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="swift-recon-cron" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697422 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="swift-recon-cron" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697444 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-auditor" 
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697456 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697473 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-reaper" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697484 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-reaper" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697507 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697519 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697535 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697547 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697561 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-reaper" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697573 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-reaper" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697592 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-server" Jan 10 
07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697603 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-server" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697617 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697629 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697648 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697659 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697675 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="swift-recon-cron" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697688 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="swift-recon-cron" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697711 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697722 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697738 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-expirer" Jan 10 07:06:54 
crc kubenswrapper[4810]: I0110 07:06:54.697749 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-expirer" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697763 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="rsync" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697774 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="rsync" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697793 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697809 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697827 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697839 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697859 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697871 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697889 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 
07:06:54.697901 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697920 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697932 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697949 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697960 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.697979 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.697990 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-server" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698006 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698018 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698033 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 
07:06:54.698045 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-server" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698062 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698074 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-server" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698090 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698102 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698123 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698134 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698150 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698163 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698180 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="rsync" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698217 
4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="rsync" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698234 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698246 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698260 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698273 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698285 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-reaper" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698297 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-reaper" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698314 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84852f5-9196-4304-8c33-83cfa9bfc818" containerName="proxy-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698327 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84852f5-9196-4304-8c33-83cfa9bfc818" containerName="proxy-server" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698343 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698356 4810 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-server" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698371 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698383 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698398 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="swift-recon-cron" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698409 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="swift-recon-cron" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698421 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698433 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.698451 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-expirer" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698462 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-expirer" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698668 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698690 4810 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698708 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698725 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698736 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698752 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="rsync" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698771 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698787 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698803 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698818 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698837 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: 
I0110 07:06:54.698855 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-expirer" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698869 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="rsync" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698885 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84852f5-9196-4304-8c33-83cfa9bfc818" containerName="proxy-httpd" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698897 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698912 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698931 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698951 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-reaper" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698971 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-reaper" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.698989 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-expirer" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699008 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="account-auditor" Jan 10 07:06:54 crc 
kubenswrapper[4810]: I0110 07:06:54.699021 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699034 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699048 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84852f5-9196-4304-8c33-83cfa9bfc818" containerName="proxy-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699065 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699078 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699095 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699108 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699125 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="container-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699143 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699158 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" 
containerName="object-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699175 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="swift-recon-cron" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699218 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-reaper" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699235 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-auditor" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699254 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="account-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699271 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-expirer" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699284 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="container-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699298 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d2acfc7-4b6a-4cbf-8cf9-aa65b7449497" containerName="swift-ring-rebalance" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699312 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-updater" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699325 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="account-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699343 4810 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-server" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699358 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="rsync" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699375 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="container-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699391 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e3bd11f-41c4-4d7e-a4db-feb15bf853b7" containerName="object-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699404 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="object-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699416 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a75358c5-e04b-409e-8ba5-c5e184d656b1" containerName="swift-recon-cron" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699434 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="swift-recon-cron" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.699449 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59b90ce-9a68-4262-8eca-90acf601a7fe" containerName="object-replicator" Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.706479 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.709409 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-dw5cm"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.710069 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.711763 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.713391 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.734671 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"]
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.855260 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52kzr\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-kube-api-access-52kzr\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.855311 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.855352 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2d79eee-543f-453a-80a5-2c7b4f072992-lock\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.855599 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.855706 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2d79eee-543f-453a-80a5-2c7b4f072992-cache\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.957110 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.957240 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52kzr\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-kube-api-access-52kzr\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.957302 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2d79eee-543f-453a-80a5-2c7b4f072992-lock\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.957383 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.957416 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.957467 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:06:54 crc kubenswrapper[4810]: E0110 07:06:54.957560 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift podName:b2d79eee-543f-453a-80a5-2c7b4f072992 nodeName:}" failed. No retries permitted until 2026-01-10 07:06:55.457524196 +0000 UTC m=+1244.073017109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift") pod "swift-storage-0" (UID: "b2d79eee-543f-453a-80a5-2c7b4f072992") : configmap "swift-ring-files" not found
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.957469 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2d79eee-543f-453a-80a5-2c7b4f072992-cache\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.957976 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") device mount path \"/mnt/openstack/pv06\"" pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.958173 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2d79eee-543f-453a-80a5-2c7b4f072992-cache\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.958241 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2d79eee-543f-453a-80a5-2c7b4f072992-lock\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.993600 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52kzr\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-kube-api-access-52kzr\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:54 crc kubenswrapper[4810]: I0110 07:06:54.995726 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:55 crc kubenswrapper[4810]: I0110 07:06:55.466297 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:55 crc kubenswrapper[4810]: E0110 07:06:55.466528 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:06:55 crc kubenswrapper[4810]: E0110 07:06:55.466551 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:06:55 crc kubenswrapper[4810]: E0110 07:06:55.466627 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift podName:b2d79eee-543f-453a-80a5-2c7b4f072992 nodeName:}" failed. No retries permitted until 2026-01-10 07:06:56.466604486 +0000 UTC m=+1245.082097409 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift") pod "swift-storage-0" (UID: "b2d79eee-543f-453a-80a5-2c7b4f072992") : configmap "swift-ring-files" not found
Jan 10 07:06:56 crc kubenswrapper[4810]: I0110 07:06:56.484089 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:56 crc kubenswrapper[4810]: E0110 07:06:56.484398 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:06:56 crc kubenswrapper[4810]: E0110 07:06:56.484573 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:06:56 crc kubenswrapper[4810]: E0110 07:06:56.484649 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift podName:b2d79eee-543f-453a-80a5-2c7b4f072992 nodeName:}" failed. No retries permitted until 2026-01-10 07:06:58.484623953 +0000 UTC m=+1247.100116876 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift") pod "swift-storage-0" (UID: "b2d79eee-543f-453a-80a5-2c7b4f072992") : configmap "swift-ring-files" not found
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.438834 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-p8ds4"]
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.440228 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.442530 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.443089 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.443974 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.474818 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-p8ds4"]
Jan 10 07:06:58 crc kubenswrapper[4810]: E0110 07:06:58.475464 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[dispersionconf etc-swift kube-api-access-dfcdl ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[dispersionconf etc-swift kube-api-access-dfcdl ring-data-devices scripts swiftconf]: context canceled" pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4" podUID="6d663717-3c3f-4ac3-b5f4-63b9f2db9c68"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.483905 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-gcgk6"]
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.484883 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.513849 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-gcgk6"]
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.516833 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfcdl\" (UniqueName: \"kubernetes.io/projected/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-kube-api-access-dfcdl\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.516872 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-etc-swift\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.516902 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-scripts\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.516941 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.516969 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-dispersionconf\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.516993 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-swiftconf\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.517060 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-ring-data-devices\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: E0110 07:06:58.517286 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:06:58 crc kubenswrapper[4810]: E0110 07:06:58.517307 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:06:58 crc kubenswrapper[4810]: E0110 07:06:58.517348 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift podName:b2d79eee-543f-453a-80a5-2c7b4f072992 nodeName:}" failed. No retries permitted until 2026-01-10 07:07:02.517332238 +0000 UTC m=+1251.132825131 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift") pod "swift-storage-0" (UID: "b2d79eee-543f-453a-80a5-2c7b4f072992") : configmap "swift-ring-files" not found
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.525560 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-p8ds4"]
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.618656 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a0139da-f5bb-42bc-aba9-d6dc692704a1-dispersionconf\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.619013 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a0139da-f5bb-42bc-aba9-d6dc692704a1-scripts\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.619044 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-ring-data-devices\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.619109 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a0139da-f5bb-42bc-aba9-d6dc692704a1-ring-data-devices\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.619132 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a0139da-f5bb-42bc-aba9-d6dc692704a1-etc-swift\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.619159 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xdrq\" (UniqueName: \"kubernetes.io/projected/7a0139da-f5bb-42bc-aba9-d6dc692704a1-kube-api-access-7xdrq\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.619214 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfcdl\" (UniqueName: \"kubernetes.io/projected/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-kube-api-access-dfcdl\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.619239 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-etc-swift\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.619261 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-scripts\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.619302 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a0139da-f5bb-42bc-aba9-d6dc692704a1-swiftconf\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.619323 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-dispersionconf\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.619345 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-swiftconf\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.620734 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-ring-data-devices\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.621272 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-etc-swift\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.621428 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-scripts\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.630752 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-dispersionconf\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.644828 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-swiftconf\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.649791 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfcdl\" (UniqueName: \"kubernetes.io/projected/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-kube-api-access-dfcdl\") pod \"swift-ring-rebalance-p8ds4\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") " pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.721014 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a0139da-f5bb-42bc-aba9-d6dc692704a1-swiftconf\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.721114 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a0139da-f5bb-42bc-aba9-d6dc692704a1-dispersionconf\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.721190 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a0139da-f5bb-42bc-aba9-d6dc692704a1-scripts\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.721330 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a0139da-f5bb-42bc-aba9-d6dc692704a1-ring-data-devices\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.721377 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a0139da-f5bb-42bc-aba9-d6dc692704a1-etc-swift\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.721442 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xdrq\" (UniqueName: \"kubernetes.io/projected/7a0139da-f5bb-42bc-aba9-d6dc692704a1-kube-api-access-7xdrq\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.722274 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a0139da-f5bb-42bc-aba9-d6dc692704a1-etc-swift\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.722578 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a0139da-f5bb-42bc-aba9-d6dc692704a1-ring-data-devices\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.723383 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a0139da-f5bb-42bc-aba9-d6dc692704a1-scripts\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.727981 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a0139da-f5bb-42bc-aba9-d6dc692704a1-swiftconf\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.730536 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a0139da-f5bb-42bc-aba9-d6dc692704a1-dispersionconf\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.755934 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xdrq\" (UniqueName: \"kubernetes.io/projected/7a0139da-f5bb-42bc-aba9-d6dc692704a1-kube-api-access-7xdrq\") pod \"swift-ring-rebalance-gcgk6\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:58 crc kubenswrapper[4810]: I0110 07:06:58.800825 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6"
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.150240 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.163500 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.237128 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-etc-swift\") pod \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") "
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.237247 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-scripts\") pod \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") "
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.237349 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-dispersionconf\") pod \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") "
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.237430 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfcdl\" (UniqueName: \"kubernetes.io/projected/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-kube-api-access-dfcdl\") pod \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") "
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.237663 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-ring-data-devices\") pod \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") "
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.237731 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-swiftconf\") pod \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\" (UID: \"6d663717-3c3f-4ac3-b5f4-63b9f2db9c68\") "
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.239406 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6d663717-3c3f-4ac3-b5f4-63b9f2db9c68" (UID: "6d663717-3c3f-4ac3-b5f4-63b9f2db9c68"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.239385 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-scripts" (OuterVolumeSpecName: "scripts") pod "6d663717-3c3f-4ac3-b5f4-63b9f2db9c68" (UID: "6d663717-3c3f-4ac3-b5f4-63b9f2db9c68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.240188 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6d663717-3c3f-4ac3-b5f4-63b9f2db9c68" (UID: "6d663717-3c3f-4ac3-b5f4-63b9f2db9c68"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.243245 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6d663717-3c3f-4ac3-b5f4-63b9f2db9c68" (UID: "6d663717-3c3f-4ac3-b5f4-63b9f2db9c68"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.243556 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-kube-api-access-dfcdl" (OuterVolumeSpecName: "kube-api-access-dfcdl") pod "6d663717-3c3f-4ac3-b5f4-63b9f2db9c68" (UID: "6d663717-3c3f-4ac3-b5f4-63b9f2db9c68"). InnerVolumeSpecName "kube-api-access-dfcdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.244884 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6d663717-3c3f-4ac3-b5f4-63b9f2db9c68" (UID: "6d663717-3c3f-4ac3-b5f4-63b9f2db9c68"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.338078 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-gcgk6"]
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.340111 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfcdl\" (UniqueName: \"kubernetes.io/projected/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-kube-api-access-dfcdl\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.340154 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.340174 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.340217 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.340239 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-scripts\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:59 crc kubenswrapper[4810]: I0110 07:06:59.340256 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 10 07:06:59 crc kubenswrapper[4810]: W0110 07:06:59.345087 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a0139da_f5bb_42bc_aba9_d6dc692704a1.slice/crio-9de355af30aa962dc7184744f2618dd583541b64f03a36ca600e2a70b49492f1 WatchSource:0}: Error finding container 9de355af30aa962dc7184744f2618dd583541b64f03a36ca600e2a70b49492f1: Status 404 returned error can't find the container with id 9de355af30aa962dc7184744f2618dd583541b64f03a36ca600e2a70b49492f1
Jan 10 07:07:00 crc kubenswrapper[4810]: I0110 07:07:00.160638 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6" event={"ID":"7a0139da-f5bb-42bc-aba9-d6dc692704a1","Type":"ContainerStarted","Data":"9de355af30aa962dc7184744f2618dd583541b64f03a36ca600e2a70b49492f1"}
Jan 10 07:07:00 crc kubenswrapper[4810]: I0110 07:07:00.160657 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-p8ds4"
Jan 10 07:07:00 crc kubenswrapper[4810]: I0110 07:07:00.242744 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-p8ds4"]
Jan 10 07:07:00 crc kubenswrapper[4810]: I0110 07:07:00.251028 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-p8ds4"]
Jan 10 07:07:01 crc kubenswrapper[4810]: I0110 07:07:01.701416 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d663717-3c3f-4ac3-b5f4-63b9f2db9c68" path="/var/lib/kubelet/pods/6d663717-3c3f-4ac3-b5f4-63b9f2db9c68/volumes"
Jan 10 07:07:02 crc kubenswrapper[4810]: I0110 07:07:02.598310 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:07:02 crc kubenswrapper[4810]: E0110 07:07:02.598469 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:07:02 crc kubenswrapper[4810]: E0110 07:07:02.598731 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:07:02 crc kubenswrapper[4810]: E0110 07:07:02.598777 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift podName:b2d79eee-543f-453a-80a5-2c7b4f072992 nodeName:}" failed. No retries permitted until 2026-01-10 07:07:10.598762619 +0000 UTC m=+1259.214255502 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift") pod "swift-storage-0" (UID: "b2d79eee-543f-453a-80a5-2c7b4f072992") : configmap "swift-ring-files" not found
Jan 10 07:07:04 crc kubenswrapper[4810]: I0110 07:07:04.209494 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6" event={"ID":"7a0139da-f5bb-42bc-aba9-d6dc692704a1","Type":"ContainerStarted","Data":"f979642fefe549adb5beed83d78e729c7a655e570f9c613a7acb02d485a72c3e"}
Jan 10 07:07:04 crc kubenswrapper[4810]: I0110 07:07:04.243437 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6" podStartSLOduration=6.243419035 podStartE2EDuration="6.243419035s" podCreationTimestamp="2026-01-10 07:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:07:04.23650695 +0000 UTC m=+1252.851999833" watchObservedRunningTime="2026-01-10 07:07:04.243419035 +0000 UTC m=+1252.858911918"
Jan 10 07:07:10 crc kubenswrapper[4810]: I0110 07:07:10.649689 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:07:10 crc kubenswrapper[4810]: E0110 07:07:10.649900 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:07:10 crc kubenswrapper[4810]: E0110 07:07:10.650435 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:07:10 crc kubenswrapper[4810]: E0110 07:07:10.650515 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift podName:b2d79eee-543f-453a-80a5-2c7b4f072992 nodeName:}" failed. No retries permitted until 2026-01-10 07:07:26.650488378 +0000 UTC m=+1275.265981281 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift") pod "swift-storage-0" (UID: "b2d79eee-543f-453a-80a5-2c7b4f072992") : configmap "swift-ring-files" not found
Jan 10 07:07:13 crc kubenswrapper[4810]: I0110 07:07:13.283980 4810 generic.go:334] "Generic (PLEG): container finished" podID="7a0139da-f5bb-42bc-aba9-d6dc692704a1" containerID="f979642fefe549adb5beed83d78e729c7a655e570f9c613a7acb02d485a72c3e" exitCode=0
Jan 10 07:07:13 crc kubenswrapper[4810]: I0110 07:07:13.284097 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6" event={"ID":"7a0139da-f5bb-42bc-aba9-d6dc692704a1","Type":"ContainerDied","Data":"f979642fefe549adb5beed83d78e729c7a655e570f9c613a7acb02d485a72c3e"}
Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.716856 4810 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6" Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.822895 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a0139da-f5bb-42bc-aba9-d6dc692704a1-scripts\") pod \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.823022 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xdrq\" (UniqueName: \"kubernetes.io/projected/7a0139da-f5bb-42bc-aba9-d6dc692704a1-kube-api-access-7xdrq\") pod \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.823074 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a0139da-f5bb-42bc-aba9-d6dc692704a1-swiftconf\") pod \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.823133 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a0139da-f5bb-42bc-aba9-d6dc692704a1-dispersionconf\") pod \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.823172 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a0139da-f5bb-42bc-aba9-d6dc692704a1-ring-data-devices\") pod \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.823233 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/empty-dir/7a0139da-f5bb-42bc-aba9-d6dc692704a1-etc-swift\") pod \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\" (UID: \"7a0139da-f5bb-42bc-aba9-d6dc692704a1\") " Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.823999 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a0139da-f5bb-42bc-aba9-d6dc692704a1-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7a0139da-f5bb-42bc-aba9-d6dc692704a1" (UID: "7a0139da-f5bb-42bc-aba9-d6dc692704a1"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.824171 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a0139da-f5bb-42bc-aba9-d6dc692704a1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7a0139da-f5bb-42bc-aba9-d6dc692704a1" (UID: "7a0139da-f5bb-42bc-aba9-d6dc692704a1"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.832218 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0139da-f5bb-42bc-aba9-d6dc692704a1-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7a0139da-f5bb-42bc-aba9-d6dc692704a1" (UID: "7a0139da-f5bb-42bc-aba9-d6dc692704a1"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.841467 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0139da-f5bb-42bc-aba9-d6dc692704a1-kube-api-access-7xdrq" (OuterVolumeSpecName: "kube-api-access-7xdrq") pod "7a0139da-f5bb-42bc-aba9-d6dc692704a1" (UID: "7a0139da-f5bb-42bc-aba9-d6dc692704a1"). InnerVolumeSpecName "kube-api-access-7xdrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.841912 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0139da-f5bb-42bc-aba9-d6dc692704a1-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7a0139da-f5bb-42bc-aba9-d6dc692704a1" (UID: "7a0139da-f5bb-42bc-aba9-d6dc692704a1"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.859850 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a0139da-f5bb-42bc-aba9-d6dc692704a1-scripts" (OuterVolumeSpecName: "scripts") pod "7a0139da-f5bb-42bc-aba9-d6dc692704a1" (UID: "7a0139da-f5bb-42bc-aba9-d6dc692704a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.925352 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7a0139da-f5bb-42bc-aba9-d6dc692704a1-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.925394 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7a0139da-f5bb-42bc-aba9-d6dc692704a1-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.925409 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7a0139da-f5bb-42bc-aba9-d6dc692704a1-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.925421 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a0139da-f5bb-42bc-aba9-d6dc692704a1-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 
07:07:14.925434 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xdrq\" (UniqueName: \"kubernetes.io/projected/7a0139da-f5bb-42bc-aba9-d6dc692704a1-kube-api-access-7xdrq\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:14 crc kubenswrapper[4810]: I0110 07:07:14.925448 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7a0139da-f5bb-42bc-aba9-d6dc692704a1-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:15 crc kubenswrapper[4810]: I0110 07:07:15.300910 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6" event={"ID":"7a0139da-f5bb-42bc-aba9-d6dc692704a1","Type":"ContainerDied","Data":"9de355af30aa962dc7184744f2618dd583541b64f03a36ca600e2a70b49492f1"} Jan 10 07:07:15 crc kubenswrapper[4810]: I0110 07:07:15.301332 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9de355af30aa962dc7184744f2618dd583541b64f03a36ca600e2a70b49492f1" Jan 10 07:07:15 crc kubenswrapper[4810]: I0110 07:07:15.300979 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-gcgk6" Jan 10 07:07:20 crc kubenswrapper[4810]: I0110 07:07:20.883585 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 10 07:07:20 crc kubenswrapper[4810]: I0110 07:07:20.883833 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 10 07:07:20 crc kubenswrapper[4810]: I0110 07:07:20.883874 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 07:07:20 crc kubenswrapper[4810]: I0110 07:07:20.884503 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef69a96aed65e9ebe56b671768834cb4b9361c0d3969bd6b62fbd2f9b5387011"} pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 10 07:07:20 crc kubenswrapper[4810]: I0110 07:07:20.884550 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" containerID="cri-o://ef69a96aed65e9ebe56b671768834cb4b9361c0d3969bd6b62fbd2f9b5387011" gracePeriod=600 Jan 10 07:07:21 crc kubenswrapper[4810]: I0110 07:07:21.353693 4810 generic.go:334] "Generic (PLEG): container finished" 
podID="b5b79429-9259-412f-bab8-27865ab7029b" containerID="ef69a96aed65e9ebe56b671768834cb4b9361c0d3969bd6b62fbd2f9b5387011" exitCode=0 Jan 10 07:07:21 crc kubenswrapper[4810]: I0110 07:07:21.353769 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerDied","Data":"ef69a96aed65e9ebe56b671768834cb4b9361c0d3969bd6b62fbd2f9b5387011"} Jan 10 07:07:21 crc kubenswrapper[4810]: I0110 07:07:21.353821 4810 scope.go:117] "RemoveContainer" containerID="20d4ffbd9b363df7f1755c8df4cc04082344a1639cdb27e352856dec031d294d" Jan 10 07:07:22 crc kubenswrapper[4810]: I0110 07:07:22.360515 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerStarted","Data":"a1621bdd26d8919cc52b7877444181bced5c9989faf9461e3b82b9a71cd444d4"} Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.301614 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7"] Jan 10 07:07:24 crc kubenswrapper[4810]: E0110 07:07:24.302341 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0139da-f5bb-42bc-aba9-d6dc692704a1" containerName="swift-ring-rebalance" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.302353 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0139da-f5bb-42bc-aba9-d6dc692704a1" containerName="swift-ring-rebalance" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.302506 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0139da-f5bb-42bc-aba9-d6dc692704a1" containerName="swift-ring-rebalance" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.303151 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.305955 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.314111 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dbb492-98d6-42b7-9c9a-54ebf810716e-run-httpd\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.314351 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-564n7\" (UniqueName: \"kubernetes.io/projected/80dbb492-98d6-42b7-9c9a-54ebf810716e-kube-api-access-564n7\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.314410 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dbb492-98d6-42b7-9c9a-54ebf810716e-log-httpd\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.314446 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80dbb492-98d6-42b7-9c9a-54ebf810716e-config-data\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.314591 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/80dbb492-98d6-42b7-9c9a-54ebf810716e-etc-swift\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.330449 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7"] Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.415433 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dbb492-98d6-42b7-9c9a-54ebf810716e-run-httpd\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.415570 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-564n7\" (UniqueName: \"kubernetes.io/projected/80dbb492-98d6-42b7-9c9a-54ebf810716e-kube-api-access-564n7\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.415606 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dbb492-98d6-42b7-9c9a-54ebf810716e-log-httpd\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.415632 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80dbb492-98d6-42b7-9c9a-54ebf810716e-config-data\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: 
\"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.415676 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/80dbb492-98d6-42b7-9c9a-54ebf810716e-etc-swift\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.416112 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dbb492-98d6-42b7-9c9a-54ebf810716e-run-httpd\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.416343 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dbb492-98d6-42b7-9c9a-54ebf810716e-log-httpd\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.423299 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80dbb492-98d6-42b7-9c9a-54ebf810716e-config-data\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.426533 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/80dbb492-98d6-42b7-9c9a-54ebf810716e-etc-swift\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " 
pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.442969 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-564n7\" (UniqueName: \"kubernetes.io/projected/80dbb492-98d6-42b7-9c9a-54ebf810716e-kube-api-access-564n7\") pod \"swift-proxy-5d79bddd55-8dsk7\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:24 crc kubenswrapper[4810]: I0110 07:07:24.620112 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:26 crc kubenswrapper[4810]: I0110 07:07:26.334835 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7"] Jan 10 07:07:26 crc kubenswrapper[4810]: I0110 07:07:26.389019 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" event={"ID":"80dbb492-98d6-42b7-9c9a-54ebf810716e","Type":"ContainerStarted","Data":"fec9226819681d1a0baafc6ecdeb1ced8f4908e79751fec51ce40300b591351a"} Jan 10 07:07:26 crc kubenswrapper[4810]: I0110 07:07:26.661489 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:07:26 crc kubenswrapper[4810]: I0110 07:07:26.667848 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift\") pod \"swift-storage-0\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:07:26 crc kubenswrapper[4810]: I0110 07:07:26.838898 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:07:27 crc kubenswrapper[4810]: I0110 07:07:27.400359 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" event={"ID":"80dbb492-98d6-42b7-9c9a-54ebf810716e","Type":"ContainerStarted","Data":"3f52df5ff0cd71dd3547d53f47753e6de68179edbb7e0c45392caaef148a55d1"} Jan 10 07:07:27 crc kubenswrapper[4810]: W0110 07:07:27.405059 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2d79eee_543f_453a_80a5_2c7b4f072992.slice/crio-e7ac2bc01668a8ca353582c99b9164aff03d9de8b3e24f459f1b297c79564726 WatchSource:0}: Error finding container e7ac2bc01668a8ca353582c99b9164aff03d9de8b3e24f459f1b297c79564726: Status 404 returned error can't find the container with id e7ac2bc01668a8ca353582c99b9164aff03d9de8b3e24f459f1b297c79564726 Jan 10 07:07:27 crc kubenswrapper[4810]: I0110 07:07:27.406387 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:07:28 crc kubenswrapper[4810]: I0110 07:07:28.408785 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" event={"ID":"80dbb492-98d6-42b7-9c9a-54ebf810716e","Type":"ContainerStarted","Data":"e16d2c4dce89ebe8aec5b66f6973c56dba19fa81902d72b670f5e5e8beae1d1f"} Jan 10 07:07:28 crc kubenswrapper[4810]: I0110 07:07:28.409052 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:28 crc kubenswrapper[4810]: I0110 07:07:28.409065 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:28 crc kubenswrapper[4810]: I0110 07:07:28.410821 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"32b7067531b62df3ad7e31945d659dc3837f7dfbf95819d55636075bb0a83cb6"} Jan 10 07:07:28 crc kubenswrapper[4810]: I0110 07:07:28.410864 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"ae50d21a0f4c7085c72f9fa432f10dc89cafd55acc39d11b53a95f5e0e027ccd"} Jan 10 07:07:28 crc kubenswrapper[4810]: I0110 07:07:28.410879 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"e7ac2bc01668a8ca353582c99b9164aff03d9de8b3e24f459f1b297c79564726"} Jan 10 07:07:28 crc kubenswrapper[4810]: I0110 07:07:28.427047 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" podStartSLOduration=4.427023168 podStartE2EDuration="4.427023168s" podCreationTimestamp="2026-01-10 07:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:07:28.424879777 +0000 UTC m=+1277.040372670" watchObservedRunningTime="2026-01-10 07:07:28.427023168 +0000 UTC m=+1277.042516051" Jan 10 07:07:29 crc kubenswrapper[4810]: I0110 07:07:29.425700 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"d31c7f0b776d7086b0b35b49e5526c3cc8f26710c6b19227897042e9ac89dfee"} Jan 10 07:07:29 crc kubenswrapper[4810]: I0110 07:07:29.426230 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"2f53ce34d9866ac67233d56658a17d9f4fb0cb74b07693536aa72b2557aaa94e"} Jan 10 07:07:30 crc 
kubenswrapper[4810]: I0110 07:07:30.433907 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"c08eaea4389828436b043867ce58d7b6af66523eaaff5acea169c7a7a640d435"} Jan 10 07:07:30 crc kubenswrapper[4810]: I0110 07:07:30.433944 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"739e117d06c76ab794c2c79d6ffe9de45d07e0d09d4ce4e731e628ef9c33eb96"} Jan 10 07:07:30 crc kubenswrapper[4810]: I0110 07:07:30.433955 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"242eaa6dc72ffb5fc8835d99ac842986ad7247e01e2f33a15c4eb6050f695b0c"} Jan 10 07:07:30 crc kubenswrapper[4810]: I0110 07:07:30.433964 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"157ab48b43b44b78370cf0dba0ea82fa4b2c5cf07495ab5b732b2569cb128325"} Jan 10 07:07:32 crc kubenswrapper[4810]: I0110 07:07:32.461447 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"2d41d6a6c9b35996421da3cd5f3d4ce2b54c43b7abd633b2fb715e5aa7ea78bf"} Jan 10 07:07:32 crc kubenswrapper[4810]: I0110 07:07:32.462049 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"bb596ef430a70bffada4fa7f41460001c7747ace5e04d548f8b088ca6a7cb60e"} Jan 10 07:07:32 crc kubenswrapper[4810]: I0110 07:07:32.462071 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"02f9ddf77ea3a06cf569e0c151c8dbb1be8b7d7c3f61d533ff65d3bd91a0328b"} Jan 10 07:07:33 crc kubenswrapper[4810]: I0110 07:07:33.477975 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"8af850d778d484d583186c964ec4702c4926ac5a2fd740a1307acf74356556d6"} Jan 10 07:07:34 crc kubenswrapper[4810]: I0110 07:07:34.517072 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"d283baafec6d89466ab8b7c1e7d2446c341711bdf3f51294aa93f5454f5999d8"} Jan 10 07:07:34 crc kubenswrapper[4810]: I0110 07:07:34.517176 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"5c7056f35cfd4b814068f1a465fabd2bb9da7d03469ff0e7e6888abfce380126"} Jan 10 07:07:34 crc kubenswrapper[4810]: I0110 07:07:34.622971 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:34 crc kubenswrapper[4810]: I0110 07:07:34.624795 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:35 crc kubenswrapper[4810]: I0110 07:07:35.534677 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"21aa74c4dddabb112bfa59799b6c7fd04e091aa1955ffd3b49c17947611ba11c"} Jan 10 07:07:35 crc kubenswrapper[4810]: I0110 07:07:35.535067 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerStarted","Data":"fe9d85652fdb765c97fe268ca75de2b7ea9e0de4ee99ad1ff904a66183eacf43"}
Jan 10 07:07:35 crc kubenswrapper[4810]: I0110 07:07:35.650592 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=42.650577655 podStartE2EDuration="42.650577655s" podCreationTimestamp="2026-01-10 07:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:07:35.64702367 +0000 UTC m=+1284.262516563" watchObservedRunningTime="2026-01-10 07:07:35.650577655 +0000 UTC m=+1284.266070538"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.028284 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"]
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.029837 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.033796 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.038836 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.075479 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"]
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.141174 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-ring-data-devices\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.141248 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-swiftconf\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.141287 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-etc-swift\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.141350 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-dispersionconf\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.141440 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24mrp\" (UniqueName: \"kubernetes.io/projected/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-kube-api-access-24mrp\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.141540 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-scripts\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.243130 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-etc-swift\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.243240 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-dispersionconf\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.243293 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24mrp\" (UniqueName: \"kubernetes.io/projected/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-kube-api-access-24mrp\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.243348 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-scripts\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.243434 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-ring-data-devices\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.243470 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-swiftconf\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.243659 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-etc-swift\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.244492 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-scripts\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.244816 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-ring-data-devices\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.249607 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-dispersionconf\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.250052 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-swiftconf\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.261492 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24mrp\" (UniqueName: \"kubernetes.io/projected/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-kube-api-access-24mrp\") pod \"swift-ring-rebalance-debug-f4tbz\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.370181 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:37 crc kubenswrapper[4810]: I0110 07:07:37.847480 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"]
Jan 10 07:07:37 crc kubenswrapper[4810]: W0110 07:07:37.849741 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dc4ce5a_41aa_4be3_a5aa_0fbc24803b07.slice/crio-2fd6a7296f112b01ef6b109f13f52624ac0c75c254a448a02d0a788ff3d01994 WatchSource:0}: Error finding container 2fd6a7296f112b01ef6b109f13f52624ac0c75c254a448a02d0a788ff3d01994: Status 404 returned error can't find the container with id 2fd6a7296f112b01ef6b109f13f52624ac0c75c254a448a02d0a788ff3d01994
Jan 10 07:07:38 crc kubenswrapper[4810]: I0110 07:07:38.559527 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz" event={"ID":"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07","Type":"ContainerStarted","Data":"7865c656949217c626d3c92047ddf9ed9fc4984d4128794f19fff4f8b9cfda76"}
Jan 10 07:07:38 crc kubenswrapper[4810]: I0110 07:07:38.559801 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz" event={"ID":"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07","Type":"ContainerStarted","Data":"2fd6a7296f112b01ef6b109f13f52624ac0c75c254a448a02d0a788ff3d01994"}
Jan 10 07:07:38 crc kubenswrapper[4810]: I0110 07:07:38.586131 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz" podStartSLOduration=2.586114925 podStartE2EDuration="2.586114925s" podCreationTimestamp="2026-01-10 07:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:07:38.581036273 +0000 UTC m=+1287.196529176" watchObservedRunningTime="2026-01-10 07:07:38.586114925 +0000 UTC m=+1287.201607808"
Jan 10 07:07:42 crc kubenswrapper[4810]: I0110 07:07:42.592263 4810 generic.go:334] "Generic (PLEG): container finished" podID="9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07" containerID="7865c656949217c626d3c92047ddf9ed9fc4984d4128794f19fff4f8b9cfda76" exitCode=0
Jan 10 07:07:42 crc kubenswrapper[4810]: I0110 07:07:42.592422 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz" event={"ID":"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07","Type":"ContainerDied","Data":"7865c656949217c626d3c92047ddf9ed9fc4984d4128794f19fff4f8b9cfda76"}
Jan 10 07:07:43 crc kubenswrapper[4810]: I0110 07:07:43.878069 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:43 crc kubenswrapper[4810]: I0110 07:07:43.918335 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"]
Jan 10 07:07:43 crc kubenswrapper[4810]: I0110 07:07:43.923885 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"]
Jan 10 07:07:43 crc kubenswrapper[4810]: I0110 07:07:43.987652 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24mrp\" (UniqueName: \"kubernetes.io/projected/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-kube-api-access-24mrp\") pod \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") "
Jan 10 07:07:43 crc kubenswrapper[4810]: I0110 07:07:43.987803 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-ring-data-devices\") pod \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") "
Jan 10 07:07:43 crc kubenswrapper[4810]: I0110 07:07:43.987845 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-swiftconf\") pod \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") "
Jan 10 07:07:43 crc kubenswrapper[4810]: I0110 07:07:43.987945 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-etc-swift\") pod \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") "
Jan 10 07:07:43 crc kubenswrapper[4810]: I0110 07:07:43.988020 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-dispersionconf\") pod \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") "
Jan 10 07:07:43 crc kubenswrapper[4810]: I0110 07:07:43.988048 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-scripts\") pod \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\" (UID: \"9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07\") "
Jan 10 07:07:43 crc kubenswrapper[4810]: I0110 07:07:43.989567 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07" (UID: "9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:07:43 crc kubenswrapper[4810]: I0110 07:07:43.990272 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07" (UID: "9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:07:43 crc kubenswrapper[4810]: I0110 07:07:43.995037 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-kube-api-access-24mrp" (OuterVolumeSpecName: "kube-api-access-24mrp") pod "9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07" (UID: "9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07"). InnerVolumeSpecName "kube-api-access-24mrp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.006150 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-scripts" (OuterVolumeSpecName: "scripts") pod "9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07" (UID: "9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.010136 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07" (UID: "9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.013299 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07" (UID: "9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.079736 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"]
Jan 10 07:07:44 crc kubenswrapper[4810]: E0110 07:07:44.080234 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07" containerName="swift-ring-rebalance"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.080250 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07" containerName="swift-ring-rebalance"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.080370 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07" containerName="swift-ring-rebalance"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.080783 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.088367 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"]
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.091776 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24mrp\" (UniqueName: \"kubernetes.io/projected/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-kube-api-access-24mrp\") on node \"crc\" DevicePath \"\""
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.091857 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.091887 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.091912 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.091932 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.091950 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07-scripts\") on node \"crc\" DevicePath \"\""
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.193714 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25b5813a-9525-4445-8969-bac7c8c25ee7-swiftconf\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.193771 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdr9t\" (UniqueName: \"kubernetes.io/projected/25b5813a-9525-4445-8969-bac7c8c25ee7-kube-api-access-zdr9t\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.193796 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25b5813a-9525-4445-8969-bac7c8c25ee7-etc-swift\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.193818 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25b5813a-9525-4445-8969-bac7c8c25ee7-ring-data-devices\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.193860 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25b5813a-9525-4445-8969-bac7c8c25ee7-scripts\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.193884 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25b5813a-9525-4445-8969-bac7c8c25ee7-dispersionconf\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.295956 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25b5813a-9525-4445-8969-bac7c8c25ee7-swiftconf\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.296477 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdr9t\" (UniqueName: \"kubernetes.io/projected/25b5813a-9525-4445-8969-bac7c8c25ee7-kube-api-access-zdr9t\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.296676 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25b5813a-9525-4445-8969-bac7c8c25ee7-etc-swift\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.296847 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25b5813a-9525-4445-8969-bac7c8c25ee7-ring-data-devices\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.297047 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25b5813a-9525-4445-8969-bac7c8c25ee7-scripts\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.297273 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25b5813a-9525-4445-8969-bac7c8c25ee7-dispersionconf\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.297566 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25b5813a-9525-4445-8969-bac7c8c25ee7-etc-swift\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.298065 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25b5813a-9525-4445-8969-bac7c8c25ee7-ring-data-devices\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.298487 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25b5813a-9525-4445-8969-bac7c8c25ee7-scripts\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.302995 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25b5813a-9525-4445-8969-bac7c8c25ee7-swiftconf\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.304327 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25b5813a-9525-4445-8969-bac7c8c25ee7-dispersionconf\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.317994 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdr9t\" (UniqueName: \"kubernetes.io/projected/25b5813a-9525-4445-8969-bac7c8c25ee7-kube-api-access-zdr9t\") pod \"swift-ring-rebalance-debug-ts5q8\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.398177 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.609056 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd6a7296f112b01ef6b109f13f52624ac0c75c254a448a02d0a788ff3d01994"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.609136 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-f4tbz"
Jan 10 07:07:44 crc kubenswrapper[4810]: I0110 07:07:44.817391 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"]
Jan 10 07:07:44 crc kubenswrapper[4810]: W0110 07:07:44.824430 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25b5813a_9525_4445_8969_bac7c8c25ee7.slice/crio-b9e97fb33a2c4d10c66ce08ca7d5f671c1244de4e60a7c43e477f05d2cea1dc8 WatchSource:0}: Error finding container b9e97fb33a2c4d10c66ce08ca7d5f671c1244de4e60a7c43e477f05d2cea1dc8: Status 404 returned error can't find the container with id b9e97fb33a2c4d10c66ce08ca7d5f671c1244de4e60a7c43e477f05d2cea1dc8
Jan 10 07:07:45 crc kubenswrapper[4810]: I0110 07:07:45.617761 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8" event={"ID":"25b5813a-9525-4445-8969-bac7c8c25ee7","Type":"ContainerStarted","Data":"8b3b2744a59e7d7c7dc79f450b219e30c086dc5748c7c5e18476101ceb1fd3f6"}
Jan 10 07:07:45 crc kubenswrapper[4810]: I0110 07:07:45.617801 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8" event={"ID":"25b5813a-9525-4445-8969-bac7c8c25ee7","Type":"ContainerStarted","Data":"b9e97fb33a2c4d10c66ce08ca7d5f671c1244de4e60a7c43e477f05d2cea1dc8"}
Jan 10 07:07:45 crc kubenswrapper[4810]: I0110 07:07:45.651575 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8" podStartSLOduration=1.6515499139999998 podStartE2EDuration="1.651549914s" podCreationTimestamp="2026-01-10 07:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:07:45.644380763 +0000 UTC m=+1294.259873706" watchObservedRunningTime="2026-01-10 07:07:45.651549914 +0000 UTC m=+1294.267042817"
Jan 10 07:07:45 crc kubenswrapper[4810]: I0110 07:07:45.702941 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07" path="/var/lib/kubelet/pods/9dc4ce5a-41aa-4be3-a5aa-0fbc24803b07/volumes"
Jan 10 07:07:46 crc kubenswrapper[4810]: I0110 07:07:46.628331 4810 generic.go:334] "Generic (PLEG): container finished" podID="25b5813a-9525-4445-8969-bac7c8c25ee7" containerID="8b3b2744a59e7d7c7dc79f450b219e30c086dc5748c7c5e18476101ceb1fd3f6" exitCode=0
Jan 10 07:07:46 crc kubenswrapper[4810]: I0110 07:07:46.628386 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8" event={"ID":"25b5813a-9525-4445-8969-bac7c8c25ee7","Type":"ContainerDied","Data":"8b3b2744a59e7d7c7dc79f450b219e30c086dc5748c7c5e18476101ceb1fd3f6"}
Jan 10 07:07:47 crc kubenswrapper[4810]: I0110 07:07:47.967449 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.009815 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"]
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.014613 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"]
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.149741 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25b5813a-9525-4445-8969-bac7c8c25ee7-etc-swift\") pod \"25b5813a-9525-4445-8969-bac7c8c25ee7\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") "
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.149803 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25b5813a-9525-4445-8969-bac7c8c25ee7-swiftconf\") pod \"25b5813a-9525-4445-8969-bac7c8c25ee7\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") "
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.149838 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25b5813a-9525-4445-8969-bac7c8c25ee7-scripts\") pod \"25b5813a-9525-4445-8969-bac7c8c25ee7\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") "
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.149881 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25b5813a-9525-4445-8969-bac7c8c25ee7-dispersionconf\") pod \"25b5813a-9525-4445-8969-bac7c8c25ee7\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") "
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.149897 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdr9t\" (UniqueName: \"kubernetes.io/projected/25b5813a-9525-4445-8969-bac7c8c25ee7-kube-api-access-zdr9t\") pod \"25b5813a-9525-4445-8969-bac7c8c25ee7\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") "
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.149972 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25b5813a-9525-4445-8969-bac7c8c25ee7-ring-data-devices\") pod \"25b5813a-9525-4445-8969-bac7c8c25ee7\" (UID: \"25b5813a-9525-4445-8969-bac7c8c25ee7\") "
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.150531 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b5813a-9525-4445-8969-bac7c8c25ee7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "25b5813a-9525-4445-8969-bac7c8c25ee7" (UID: "25b5813a-9525-4445-8969-bac7c8c25ee7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.150547 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25b5813a-9525-4445-8969-bac7c8c25ee7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "25b5813a-9525-4445-8969-bac7c8c25ee7" (UID: "25b5813a-9525-4445-8969-bac7c8c25ee7"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.163383 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25b5813a-9525-4445-8969-bac7c8c25ee7-kube-api-access-zdr9t" (OuterVolumeSpecName: "kube-api-access-zdr9t") pod "25b5813a-9525-4445-8969-bac7c8c25ee7" (UID: "25b5813a-9525-4445-8969-bac7c8c25ee7"). InnerVolumeSpecName "kube-api-access-zdr9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.170189 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b5813a-9525-4445-8969-bac7c8c25ee7-scripts" (OuterVolumeSpecName: "scripts") pod "25b5813a-9525-4445-8969-bac7c8c25ee7" (UID: "25b5813a-9525-4445-8969-bac7c8c25ee7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.176215 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25b5813a-9525-4445-8969-bac7c8c25ee7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "25b5813a-9525-4445-8969-bac7c8c25ee7" (UID: "25b5813a-9525-4445-8969-bac7c8c25ee7"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.179330 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25b5813a-9525-4445-8969-bac7c8c25ee7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "25b5813a-9525-4445-8969-bac7c8c25ee7" (UID: "25b5813a-9525-4445-8969-bac7c8c25ee7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.251140 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdr9t\" (UniqueName: \"kubernetes.io/projected/25b5813a-9525-4445-8969-bac7c8c25ee7-kube-api-access-zdr9t\") on node \"crc\" DevicePath \"\""
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.251169 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/25b5813a-9525-4445-8969-bac7c8c25ee7-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.251178 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/25b5813a-9525-4445-8969-bac7c8c25ee7-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.251187 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/25b5813a-9525-4445-8969-bac7c8c25ee7-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.251209 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/25b5813a-9525-4445-8969-bac7c8c25ee7-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.251217 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/25b5813a-9525-4445-8969-bac7c8c25ee7-scripts\") on node \"crc\" DevicePath \"\""
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.677030 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9e97fb33a2c4d10c66ce08ca7d5f671c1244de4e60a7c43e477f05d2cea1dc8"
Jan 10 07:07:48 crc kubenswrapper[4810]: I0110 07:07:48.677116 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ts5q8"
Jan 10 07:07:49 crc kubenswrapper[4810]: I0110 07:07:49.706294 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25b5813a-9525-4445-8969-bac7c8c25ee7" path="/var/lib/kubelet/pods/25b5813a-9525-4445-8969-bac7c8c25ee7/volumes"
Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.714848 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9"]
Jan 10 07:07:50 crc kubenswrapper[4810]: E0110 07:07:50.715180 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25b5813a-9525-4445-8969-bac7c8c25ee7" containerName="swift-ring-rebalance"
Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.715212 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b5813a-9525-4445-8969-bac7c8c25ee7" containerName="swift-ring-rebalance"
Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.715395 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="25b5813a-9525-4445-8969-bac7c8c25ee7" containerName="swift-ring-rebalance"
Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.715927 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.717844 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.718267 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.729492 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9"] Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.807770 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-swiftconf\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.807846 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-dispersionconf\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.807965 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-etc-swift\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.808021 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-ring-data-devices\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.808051 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffpt7\" (UniqueName: \"kubernetes.io/projected/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-kube-api-access-ffpt7\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.808125 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-scripts\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.909185 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-etc-swift\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.909260 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-ring-data-devices\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 
07:07:50.909279 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffpt7\" (UniqueName: \"kubernetes.io/projected/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-kube-api-access-ffpt7\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.909299 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-scripts\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.909387 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-swiftconf\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.909414 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-dispersionconf\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.909667 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-etc-swift\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 
07:07:50.910123 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-scripts\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.910254 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-ring-data-devices\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.915819 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-dispersionconf\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.918301 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-swiftconf\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:50 crc kubenswrapper[4810]: I0110 07:07:50.927814 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffpt7\" (UniqueName: \"kubernetes.io/projected/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-kube-api-access-ffpt7\") pod \"swift-ring-rebalance-debug-w6nx9\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:51 crc kubenswrapper[4810]: I0110 07:07:51.033458 4810 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:51 crc kubenswrapper[4810]: I0110 07:07:51.271615 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9"] Jan 10 07:07:51 crc kubenswrapper[4810]: I0110 07:07:51.708350 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" event={"ID":"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4","Type":"ContainerStarted","Data":"928197f951ba2ac48b458084217d74897ef773c0ccb17b2e2927879426c42564"} Jan 10 07:07:51 crc kubenswrapper[4810]: I0110 07:07:51.708659 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" event={"ID":"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4","Type":"ContainerStarted","Data":"4fe3caea8d3a6ecb967915ef5b6d76c5e11229a0b9773f2d164732095f54dc53"} Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.716545 4810 generic.go:334] "Generic (PLEG): container finished" podID="47a2ced6-b901-4ec7-8c4b-6d846a41d8e4" containerID="928197f951ba2ac48b458084217d74897ef773c0ccb17b2e2927879426c42564" exitCode=0 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.716660 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" event={"ID":"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4","Type":"ContainerDied","Data":"928197f951ba2ac48b458084217d74897ef773c0ccb17b2e2927879426c42564"} Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.758954 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9"] Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.765560 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9"] Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.871242 4810 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-gcgk6"] Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.879600 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-gcgk6"] Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.885986 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.887470 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-server" containerID="cri-o://ae50d21a0f4c7085c72f9fa432f10dc89cafd55acc39d11b53a95f5e0e027ccd" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.887711 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-server" containerID="cri-o://02f9ddf77ea3a06cf569e0c151c8dbb1be8b7d7c3f61d533ff65d3bd91a0328b" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.887762 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-updater" containerID="cri-o://8af850d778d484d583186c964ec4702c4926ac5a2fd740a1307acf74356556d6" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.887798 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-auditor" containerID="cri-o://2d41d6a6c9b35996421da3cd5f3d4ce2b54c43b7abd633b2fb715e5aa7ea78bf" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.887827 4810 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-replicator" containerID="cri-o://bb596ef430a70bffada4fa7f41460001c7747ace5e04d548f8b088ca6a7cb60e" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.887845 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-sharder" containerID="cri-o://21aa74c4dddabb112bfa59799b6c7fd04e091aa1955ffd3b49c17947611ba11c" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.887864 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-auditor" containerID="cri-o://739e117d06c76ab794c2c79d6ffe9de45d07e0d09d4ce4e731e628ef9c33eb96" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.887879 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-updater" containerID="cri-o://c08eaea4389828436b043867ce58d7b6af66523eaaff5acea169c7a7a640d435" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.887861 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-server" containerID="cri-o://157ab48b43b44b78370cf0dba0ea82fa4b2c5cf07495ab5b732b2569cb128325" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.887918 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="rsync" containerID="cri-o://d283baafec6d89466ab8b7c1e7d2446c341711bdf3f51294aa93f5454f5999d8" 
gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.887960 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-replicator" containerID="cri-o://242eaa6dc72ffb5fc8835d99ac842986ad7247e01e2f33a15c4eb6050f695b0c" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.887783 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="swift-recon-cron" containerID="cri-o://fe9d85652fdb765c97fe268ca75de2b7ea9e0de4ee99ad1ff904a66183eacf43" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.887993 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-auditor" containerID="cri-o://2f53ce34d9866ac67233d56658a17d9f4fb0cb74b07693536aa72b2557aaa94e" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.888000 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-reaper" containerID="cri-o://d31c7f0b776d7086b0b35b49e5526c3cc8f26710c6b19227897042e9ac89dfee" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.888009 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-replicator" containerID="cri-o://32b7067531b62df3ad7e31945d659dc3837f7dfbf95819d55636075bb0a83cb6" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.888317 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" 
podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-expirer" containerID="cri-o://5c7056f35cfd4b814068f1a465fabd2bb9da7d03469ff0e7e6888abfce380126" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.940019 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7"] Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.940256 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" podUID="80dbb492-98d6-42b7-9c9a-54ebf810716e" containerName="proxy-httpd" containerID="cri-o://3f52df5ff0cd71dd3547d53f47753e6de68179edbb7e0c45392caaef148a55d1" gracePeriod=30 Jan 10 07:07:52 crc kubenswrapper[4810]: I0110 07:07:52.940394 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" podUID="80dbb492-98d6-42b7-9c9a-54ebf810716e" containerName="proxy-server" containerID="cri-o://e16d2c4dce89ebe8aec5b66f6973c56dba19fa81902d72b670f5e5e8beae1d1f" gracePeriod=30 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.701309 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a0139da-f5bb-42bc-aba9-d6dc692704a1" path="/var/lib/kubelet/pods/7a0139da-f5bb-42bc-aba9-d6dc692704a1/volumes" Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.726185 4810 generic.go:334] "Generic (PLEG): container finished" podID="80dbb492-98d6-42b7-9c9a-54ebf810716e" containerID="e16d2c4dce89ebe8aec5b66f6973c56dba19fa81902d72b670f5e5e8beae1d1f" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.726235 4810 generic.go:334] "Generic (PLEG): container finished" podID="80dbb492-98d6-42b7-9c9a-54ebf810716e" containerID="3f52df5ff0cd71dd3547d53f47753e6de68179edbb7e0c45392caaef148a55d1" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.726292 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" event={"ID":"80dbb492-98d6-42b7-9c9a-54ebf810716e","Type":"ContainerDied","Data":"e16d2c4dce89ebe8aec5b66f6973c56dba19fa81902d72b670f5e5e8beae1d1f"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.726350 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" event={"ID":"80dbb492-98d6-42b7-9c9a-54ebf810716e","Type":"ContainerDied","Data":"3f52df5ff0cd71dd3547d53f47753e6de68179edbb7e0c45392caaef148a55d1"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733004 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="21aa74c4dddabb112bfa59799b6c7fd04e091aa1955ffd3b49c17947611ba11c" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733032 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="d283baafec6d89466ab8b7c1e7d2446c341711bdf3f51294aa93f5454f5999d8" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733040 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="5c7056f35cfd4b814068f1a465fabd2bb9da7d03469ff0e7e6888abfce380126" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733050 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="8af850d778d484d583186c964ec4702c4926ac5a2fd740a1307acf74356556d6" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733056 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="2d41d6a6c9b35996421da3cd5f3d4ce2b54c43b7abd633b2fb715e5aa7ea78bf" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733067 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" 
containerID="bb596ef430a70bffada4fa7f41460001c7747ace5e04d548f8b088ca6a7cb60e" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733076 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="02f9ddf77ea3a06cf569e0c151c8dbb1be8b7d7c3f61d533ff65d3bd91a0328b" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733085 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="c08eaea4389828436b043867ce58d7b6af66523eaaff5acea169c7a7a640d435" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733077 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"21aa74c4dddabb112bfa59799b6c7fd04e091aa1955ffd3b49c17947611ba11c"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733141 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"d283baafec6d89466ab8b7c1e7d2446c341711bdf3f51294aa93f5454f5999d8"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733155 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"5c7056f35cfd4b814068f1a465fabd2bb9da7d03469ff0e7e6888abfce380126"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733166 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"8af850d778d484d583186c964ec4702c4926ac5a2fd740a1307acf74356556d6"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733174 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"2d41d6a6c9b35996421da3cd5f3d4ce2b54c43b7abd633b2fb715e5aa7ea78bf"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733093 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="739e117d06c76ab794c2c79d6ffe9de45d07e0d09d4ce4e731e628ef9c33eb96" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733213 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="242eaa6dc72ffb5fc8835d99ac842986ad7247e01e2f33a15c4eb6050f695b0c" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733222 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="157ab48b43b44b78370cf0dba0ea82fa4b2c5cf07495ab5b732b2569cb128325" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733228 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="d31c7f0b776d7086b0b35b49e5526c3cc8f26710c6b19227897042e9ac89dfee" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733234 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="2f53ce34d9866ac67233d56658a17d9f4fb0cb74b07693536aa72b2557aaa94e" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733240 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="32b7067531b62df3ad7e31945d659dc3837f7dfbf95819d55636075bb0a83cb6" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733246 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="ae50d21a0f4c7085c72f9fa432f10dc89cafd55acc39d11b53a95f5e0e027ccd" exitCode=0 Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 
07:07:53.733186 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"bb596ef430a70bffada4fa7f41460001c7747ace5e04d548f8b088ca6a7cb60e"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733346 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"02f9ddf77ea3a06cf569e0c151c8dbb1be8b7d7c3f61d533ff65d3bd91a0328b"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733363 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"c08eaea4389828436b043867ce58d7b6af66523eaaff5acea169c7a7a640d435"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733375 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"739e117d06c76ab794c2c79d6ffe9de45d07e0d09d4ce4e731e628ef9c33eb96"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733386 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"242eaa6dc72ffb5fc8835d99ac842986ad7247e01e2f33a15c4eb6050f695b0c"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733407 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"157ab48b43b44b78370cf0dba0ea82fa4b2c5cf07495ab5b732b2569cb128325"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733419 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"d31c7f0b776d7086b0b35b49e5526c3cc8f26710c6b19227897042e9ac89dfee"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733429 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"2f53ce34d9866ac67233d56658a17d9f4fb0cb74b07693536aa72b2557aaa94e"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733440 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"32b7067531b62df3ad7e31945d659dc3837f7dfbf95819d55636075bb0a83cb6"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.733451 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"ae50d21a0f4c7085c72f9fa432f10dc89cafd55acc39d11b53a95f5e0e027ccd"} Jan 10 07:07:53 crc kubenswrapper[4810]: I0110 07:07:53.923949 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.057103 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffpt7\" (UniqueName: \"kubernetes.io/projected/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-kube-api-access-ffpt7\") pod \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.057278 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-swiftconf\") pod \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.057315 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-ring-data-devices\") pod \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.057339 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-scripts\") pod \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.057376 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-dispersionconf\") pod \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.057422 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-etc-swift\") pod \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\" (UID: \"47a2ced6-b901-4ec7-8c4b-6d846a41d8e4\") " Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.057933 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "47a2ced6-b901-4ec7-8c4b-6d846a41d8e4" (UID: "47a2ced6-b901-4ec7-8c4b-6d846a41d8e4"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.058232 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "47a2ced6-b901-4ec7-8c4b-6d846a41d8e4" (UID: "47a2ced6-b901-4ec7-8c4b-6d846a41d8e4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.065323 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-kube-api-access-ffpt7" (OuterVolumeSpecName: "kube-api-access-ffpt7") pod "47a2ced6-b901-4ec7-8c4b-6d846a41d8e4" (UID: "47a2ced6-b901-4ec7-8c4b-6d846a41d8e4"). InnerVolumeSpecName "kube-api-access-ffpt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.080777 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-scripts" (OuterVolumeSpecName: "scripts") pod "47a2ced6-b901-4ec7-8c4b-6d846a41d8e4" (UID: "47a2ced6-b901-4ec7-8c4b-6d846a41d8e4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.091084 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "47a2ced6-b901-4ec7-8c4b-6d846a41d8e4" (UID: "47a2ced6-b901-4ec7-8c4b-6d846a41d8e4"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.091396 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "47a2ced6-b901-4ec7-8c4b-6d846a41d8e4" (UID: "47a2ced6-b901-4ec7-8c4b-6d846a41d8e4"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.158483 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.158515 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffpt7\" (UniqueName: \"kubernetes.io/projected/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-kube-api-access-ffpt7\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.158527 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.158535 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:54 
crc kubenswrapper[4810]: I0110 07:07:54.158545 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.158553 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.375771 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.564236 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80dbb492-98d6-42b7-9c9a-54ebf810716e-config-data\") pod \"80dbb492-98d6-42b7-9c9a-54ebf810716e\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.564663 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-564n7\" (UniqueName: \"kubernetes.io/projected/80dbb492-98d6-42b7-9c9a-54ebf810716e-kube-api-access-564n7\") pod \"80dbb492-98d6-42b7-9c9a-54ebf810716e\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.564730 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dbb492-98d6-42b7-9c9a-54ebf810716e-log-httpd\") pod \"80dbb492-98d6-42b7-9c9a-54ebf810716e\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.564846 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/80dbb492-98d6-42b7-9c9a-54ebf810716e-run-httpd\") pod \"80dbb492-98d6-42b7-9c9a-54ebf810716e\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.564864 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/80dbb492-98d6-42b7-9c9a-54ebf810716e-etc-swift\") pod \"80dbb492-98d6-42b7-9c9a-54ebf810716e\" (UID: \"80dbb492-98d6-42b7-9c9a-54ebf810716e\") " Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.565599 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80dbb492-98d6-42b7-9c9a-54ebf810716e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "80dbb492-98d6-42b7-9c9a-54ebf810716e" (UID: "80dbb492-98d6-42b7-9c9a-54ebf810716e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.565878 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80dbb492-98d6-42b7-9c9a-54ebf810716e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "80dbb492-98d6-42b7-9c9a-54ebf810716e" (UID: "80dbb492-98d6-42b7-9c9a-54ebf810716e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.569831 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80dbb492-98d6-42b7-9c9a-54ebf810716e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "80dbb492-98d6-42b7-9c9a-54ebf810716e" (UID: "80dbb492-98d6-42b7-9c9a-54ebf810716e"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.580747 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80dbb492-98d6-42b7-9c9a-54ebf810716e-kube-api-access-564n7" (OuterVolumeSpecName: "kube-api-access-564n7") pod "80dbb492-98d6-42b7-9c9a-54ebf810716e" (UID: "80dbb492-98d6-42b7-9c9a-54ebf810716e"). InnerVolumeSpecName "kube-api-access-564n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.597422 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dbb492-98d6-42b7-9c9a-54ebf810716e-config-data" (OuterVolumeSpecName: "config-data") pod "80dbb492-98d6-42b7-9c9a-54ebf810716e" (UID: "80dbb492-98d6-42b7-9c9a-54ebf810716e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.666019 4810 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dbb492-98d6-42b7-9c9a-54ebf810716e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.666048 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/80dbb492-98d6-42b7-9c9a-54ebf810716e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.666057 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80dbb492-98d6-42b7-9c9a-54ebf810716e-config-data\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.666067 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-564n7\" (UniqueName: \"kubernetes.io/projected/80dbb492-98d6-42b7-9c9a-54ebf810716e-kube-api-access-564n7\") on node \"crc\" DevicePath 
\"\"" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.666075 4810 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/80dbb492-98d6-42b7-9c9a-54ebf810716e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.742082 4810 scope.go:117] "RemoveContainer" containerID="928197f951ba2ac48b458084217d74897ef773c0ccb17b2e2927879426c42564" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.742138 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-w6nx9" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.744339 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" event={"ID":"80dbb492-98d6-42b7-9c9a-54ebf810716e","Type":"ContainerDied","Data":"fec9226819681d1a0baafc6ecdeb1ced8f4908e79751fec51ce40300b591351a"} Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.744391 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.778057 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7"] Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.780090 4810 scope.go:117] "RemoveContainer" containerID="e16d2c4dce89ebe8aec5b66f6973c56dba19fa81902d72b670f5e5e8beae1d1f" Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.785923 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-5d79bddd55-8dsk7"] Jan 10 07:07:54 crc kubenswrapper[4810]: I0110 07:07:54.795534 4810 scope.go:117] "RemoveContainer" containerID="3f52df5ff0cd71dd3547d53f47753e6de68179edbb7e0c45392caaef148a55d1" Jan 10 07:07:55 crc kubenswrapper[4810]: I0110 07:07:55.707046 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a2ced6-b901-4ec7-8c4b-6d846a41d8e4" path="/var/lib/kubelet/pods/47a2ced6-b901-4ec7-8c4b-6d846a41d8e4/volumes" Jan 10 07:07:55 crc kubenswrapper[4810]: I0110 07:07:55.708055 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80dbb492-98d6-42b7-9c9a-54ebf810716e" path="/var/lib/kubelet/pods/80dbb492-98d6-42b7-9c9a-54ebf810716e/volumes" Jan 10 07:08:23 crc kubenswrapper[4810]: I0110 07:08:23.824325 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:23 crc kubenswrapper[4810]: I0110 07:08:23.950808 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2d79eee-543f-453a-80a5-2c7b4f072992-cache\") pod \"b2d79eee-543f-453a-80a5-2c7b4f072992\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " Jan 10 07:08:23 crc kubenswrapper[4810]: I0110 07:08:23.950852 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"b2d79eee-543f-453a-80a5-2c7b4f072992\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " Jan 10 07:08:23 crc kubenswrapper[4810]: I0110 07:08:23.950876 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2d79eee-543f-453a-80a5-2c7b4f072992-lock\") pod \"b2d79eee-543f-453a-80a5-2c7b4f072992\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " Jan 10 07:08:23 crc kubenswrapper[4810]: I0110 07:08:23.950961 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52kzr\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-kube-api-access-52kzr\") pod \"b2d79eee-543f-453a-80a5-2c7b4f072992\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " Jan 10 07:08:23 crc kubenswrapper[4810]: I0110 07:08:23.951036 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift\") pod \"b2d79eee-543f-453a-80a5-2c7b4f072992\" (UID: \"b2d79eee-543f-453a-80a5-2c7b4f072992\") " Jan 10 07:08:23 crc kubenswrapper[4810]: I0110 07:08:23.951626 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d79eee-543f-453a-80a5-2c7b4f072992-lock" (OuterVolumeSpecName: 
"lock") pod "b2d79eee-543f-453a-80a5-2c7b4f072992" (UID: "b2d79eee-543f-453a-80a5-2c7b4f072992"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:08:23 crc kubenswrapper[4810]: I0110 07:08:23.951646 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d79eee-543f-453a-80a5-2c7b4f072992-cache" (OuterVolumeSpecName: "cache") pod "b2d79eee-543f-453a-80a5-2c7b4f072992" (UID: "b2d79eee-543f-453a-80a5-2c7b4f072992"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:08:23 crc kubenswrapper[4810]: I0110 07:08:23.958054 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-kube-api-access-52kzr" (OuterVolumeSpecName: "kube-api-access-52kzr") pod "b2d79eee-543f-453a-80a5-2c7b4f072992" (UID: "b2d79eee-543f-453a-80a5-2c7b4f072992"). InnerVolumeSpecName "kube-api-access-52kzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:08:23 crc kubenswrapper[4810]: I0110 07:08:23.959790 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b2d79eee-543f-453a-80a5-2c7b4f072992" (UID: "b2d79eee-543f-453a-80a5-2c7b4f072992"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:08:23 crc kubenswrapper[4810]: I0110 07:08:23.960055 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "swift") pod "b2d79eee-543f-453a-80a5-2c7b4f072992" (UID: "b2d79eee-543f-453a-80a5-2c7b4f072992"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.052633 4810 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b2d79eee-543f-453a-80a5-2c7b4f072992-cache\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.052700 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.052715 4810 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b2d79eee-543f-453a-80a5-2c7b4f072992-lock\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.052728 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52kzr\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-kube-api-access-52kzr\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.052744 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b2d79eee-543f-453a-80a5-2c7b4f072992-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.068276 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.091449 4810 generic.go:334] "Generic (PLEG): container finished" podID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerID="fe9d85652fdb765c97fe268ca75de2b7ea9e0de4ee99ad1ff904a66183eacf43" exitCode=137 Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.091502 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"fe9d85652fdb765c97fe268ca75de2b7ea9e0de4ee99ad1ff904a66183eacf43"} Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.091577 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"b2d79eee-543f-453a-80a5-2c7b4f072992","Type":"ContainerDied","Data":"e7ac2bc01668a8ca353582c99b9164aff03d9de8b3e24f459f1b297c79564726"} Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.091648 4810 scope.go:117] "RemoveContainer" containerID="21aa74c4dddabb112bfa59799b6c7fd04e091aa1955ffd3b49c17947611ba11c" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.091706 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.132874 4810 scope.go:117] "RemoveContainer" containerID="fe9d85652fdb765c97fe268ca75de2b7ea9e0de4ee99ad1ff904a66183eacf43" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.138502 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.150922 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.155923 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.157663 4810 scope.go:117] "RemoveContainer" containerID="d283baafec6d89466ab8b7c1e7d2446c341711bdf3f51294aa93f5454f5999d8" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.180262 4810 scope.go:117] "RemoveContainer" containerID="5c7056f35cfd4b814068f1a465fabd2bb9da7d03469ff0e7e6888abfce380126" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.203477 4810 
scope.go:117] "RemoveContainer" containerID="8af850d778d484d583186c964ec4702c4926ac5a2fd740a1307acf74356556d6" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.218586 4810 scope.go:117] "RemoveContainer" containerID="2d41d6a6c9b35996421da3cd5f3d4ce2b54c43b7abd633b2fb715e5aa7ea78bf" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.239821 4810 scope.go:117] "RemoveContainer" containerID="bb596ef430a70bffada4fa7f41460001c7747ace5e04d548f8b088ca6a7cb60e" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.269889 4810 scope.go:117] "RemoveContainer" containerID="02f9ddf77ea3a06cf569e0c151c8dbb1be8b7d7c3f61d533ff65d3bd91a0328b" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.293485 4810 scope.go:117] "RemoveContainer" containerID="c08eaea4389828436b043867ce58d7b6af66523eaaff5acea169c7a7a640d435" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.313776 4810 scope.go:117] "RemoveContainer" containerID="739e117d06c76ab794c2c79d6ffe9de45d07e0d09d4ce4e731e628ef9c33eb96" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.330995 4810 scope.go:117] "RemoveContainer" containerID="242eaa6dc72ffb5fc8835d99ac842986ad7247e01e2f33a15c4eb6050f695b0c" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.351277 4810 scope.go:117] "RemoveContainer" containerID="157ab48b43b44b78370cf0dba0ea82fa4b2c5cf07495ab5b732b2569cb128325" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.366082 4810 scope.go:117] "RemoveContainer" containerID="d31c7f0b776d7086b0b35b49e5526c3cc8f26710c6b19227897042e9ac89dfee" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.378694 4810 scope.go:117] "RemoveContainer" containerID="2f53ce34d9866ac67233d56658a17d9f4fb0cb74b07693536aa72b2557aaa94e" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.397918 4810 scope.go:117] "RemoveContainer" containerID="32b7067531b62df3ad7e31945d659dc3837f7dfbf95819d55636075bb0a83cb6" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.420023 4810 scope.go:117] 
"RemoveContainer" containerID="ae50d21a0f4c7085c72f9fa432f10dc89cafd55acc39d11b53a95f5e0e027ccd" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.437096 4810 scope.go:117] "RemoveContainer" containerID="21aa74c4dddabb112bfa59799b6c7fd04e091aa1955ffd3b49c17947611ba11c" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.437652 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21aa74c4dddabb112bfa59799b6c7fd04e091aa1955ffd3b49c17947611ba11c\": container with ID starting with 21aa74c4dddabb112bfa59799b6c7fd04e091aa1955ffd3b49c17947611ba11c not found: ID does not exist" containerID="21aa74c4dddabb112bfa59799b6c7fd04e091aa1955ffd3b49c17947611ba11c" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.437680 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21aa74c4dddabb112bfa59799b6c7fd04e091aa1955ffd3b49c17947611ba11c"} err="failed to get container status \"21aa74c4dddabb112bfa59799b6c7fd04e091aa1955ffd3b49c17947611ba11c\": rpc error: code = NotFound desc = could not find container \"21aa74c4dddabb112bfa59799b6c7fd04e091aa1955ffd3b49c17947611ba11c\": container with ID starting with 21aa74c4dddabb112bfa59799b6c7fd04e091aa1955ffd3b49c17947611ba11c not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.437702 4810 scope.go:117] "RemoveContainer" containerID="fe9d85652fdb765c97fe268ca75de2b7ea9e0de4ee99ad1ff904a66183eacf43" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.438122 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe9d85652fdb765c97fe268ca75de2b7ea9e0de4ee99ad1ff904a66183eacf43\": container with ID starting with fe9d85652fdb765c97fe268ca75de2b7ea9e0de4ee99ad1ff904a66183eacf43 not found: ID does not exist" containerID="fe9d85652fdb765c97fe268ca75de2b7ea9e0de4ee99ad1ff904a66183eacf43" Jan 10 07:08:24 crc 
kubenswrapper[4810]: I0110 07:08:24.438141 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9d85652fdb765c97fe268ca75de2b7ea9e0de4ee99ad1ff904a66183eacf43"} err="failed to get container status \"fe9d85652fdb765c97fe268ca75de2b7ea9e0de4ee99ad1ff904a66183eacf43\": rpc error: code = NotFound desc = could not find container \"fe9d85652fdb765c97fe268ca75de2b7ea9e0de4ee99ad1ff904a66183eacf43\": container with ID starting with fe9d85652fdb765c97fe268ca75de2b7ea9e0de4ee99ad1ff904a66183eacf43 not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.438155 4810 scope.go:117] "RemoveContainer" containerID="d283baafec6d89466ab8b7c1e7d2446c341711bdf3f51294aa93f5454f5999d8" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.438439 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d283baafec6d89466ab8b7c1e7d2446c341711bdf3f51294aa93f5454f5999d8\": container with ID starting with d283baafec6d89466ab8b7c1e7d2446c341711bdf3f51294aa93f5454f5999d8 not found: ID does not exist" containerID="d283baafec6d89466ab8b7c1e7d2446c341711bdf3f51294aa93f5454f5999d8" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.438459 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d283baafec6d89466ab8b7c1e7d2446c341711bdf3f51294aa93f5454f5999d8"} err="failed to get container status \"d283baafec6d89466ab8b7c1e7d2446c341711bdf3f51294aa93f5454f5999d8\": rpc error: code = NotFound desc = could not find container \"d283baafec6d89466ab8b7c1e7d2446c341711bdf3f51294aa93f5454f5999d8\": container with ID starting with d283baafec6d89466ab8b7c1e7d2446c341711bdf3f51294aa93f5454f5999d8 not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.438473 4810 scope.go:117] "RemoveContainer" containerID="5c7056f35cfd4b814068f1a465fabd2bb9da7d03469ff0e7e6888abfce380126" Jan 10 
07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.438762 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c7056f35cfd4b814068f1a465fabd2bb9da7d03469ff0e7e6888abfce380126\": container with ID starting with 5c7056f35cfd4b814068f1a465fabd2bb9da7d03469ff0e7e6888abfce380126 not found: ID does not exist" containerID="5c7056f35cfd4b814068f1a465fabd2bb9da7d03469ff0e7e6888abfce380126" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.438831 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c7056f35cfd4b814068f1a465fabd2bb9da7d03469ff0e7e6888abfce380126"} err="failed to get container status \"5c7056f35cfd4b814068f1a465fabd2bb9da7d03469ff0e7e6888abfce380126\": rpc error: code = NotFound desc = could not find container \"5c7056f35cfd4b814068f1a465fabd2bb9da7d03469ff0e7e6888abfce380126\": container with ID starting with 5c7056f35cfd4b814068f1a465fabd2bb9da7d03469ff0e7e6888abfce380126 not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.438846 4810 scope.go:117] "RemoveContainer" containerID="8af850d778d484d583186c964ec4702c4926ac5a2fd740a1307acf74356556d6" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.439287 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af850d778d484d583186c964ec4702c4926ac5a2fd740a1307acf74356556d6\": container with ID starting with 8af850d778d484d583186c964ec4702c4926ac5a2fd740a1307acf74356556d6 not found: ID does not exist" containerID="8af850d778d484d583186c964ec4702c4926ac5a2fd740a1307acf74356556d6" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.439367 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af850d778d484d583186c964ec4702c4926ac5a2fd740a1307acf74356556d6"} err="failed to get container status 
\"8af850d778d484d583186c964ec4702c4926ac5a2fd740a1307acf74356556d6\": rpc error: code = NotFound desc = could not find container \"8af850d778d484d583186c964ec4702c4926ac5a2fd740a1307acf74356556d6\": container with ID starting with 8af850d778d484d583186c964ec4702c4926ac5a2fd740a1307acf74356556d6 not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.439413 4810 scope.go:117] "RemoveContainer" containerID="2d41d6a6c9b35996421da3cd5f3d4ce2b54c43b7abd633b2fb715e5aa7ea78bf" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.439962 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d41d6a6c9b35996421da3cd5f3d4ce2b54c43b7abd633b2fb715e5aa7ea78bf\": container with ID starting with 2d41d6a6c9b35996421da3cd5f3d4ce2b54c43b7abd633b2fb715e5aa7ea78bf not found: ID does not exist" containerID="2d41d6a6c9b35996421da3cd5f3d4ce2b54c43b7abd633b2fb715e5aa7ea78bf" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.440016 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d41d6a6c9b35996421da3cd5f3d4ce2b54c43b7abd633b2fb715e5aa7ea78bf"} err="failed to get container status \"2d41d6a6c9b35996421da3cd5f3d4ce2b54c43b7abd633b2fb715e5aa7ea78bf\": rpc error: code = NotFound desc = could not find container \"2d41d6a6c9b35996421da3cd5f3d4ce2b54c43b7abd633b2fb715e5aa7ea78bf\": container with ID starting with 2d41d6a6c9b35996421da3cd5f3d4ce2b54c43b7abd633b2fb715e5aa7ea78bf not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.440043 4810 scope.go:117] "RemoveContainer" containerID="bb596ef430a70bffada4fa7f41460001c7747ace5e04d548f8b088ca6a7cb60e" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.440554 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bb596ef430a70bffada4fa7f41460001c7747ace5e04d548f8b088ca6a7cb60e\": container with ID starting with bb596ef430a70bffada4fa7f41460001c7747ace5e04d548f8b088ca6a7cb60e not found: ID does not exist" containerID="bb596ef430a70bffada4fa7f41460001c7747ace5e04d548f8b088ca6a7cb60e" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.440587 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb596ef430a70bffada4fa7f41460001c7747ace5e04d548f8b088ca6a7cb60e"} err="failed to get container status \"bb596ef430a70bffada4fa7f41460001c7747ace5e04d548f8b088ca6a7cb60e\": rpc error: code = NotFound desc = could not find container \"bb596ef430a70bffada4fa7f41460001c7747ace5e04d548f8b088ca6a7cb60e\": container with ID starting with bb596ef430a70bffada4fa7f41460001c7747ace5e04d548f8b088ca6a7cb60e not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.440611 4810 scope.go:117] "RemoveContainer" containerID="02f9ddf77ea3a06cf569e0c151c8dbb1be8b7d7c3f61d533ff65d3bd91a0328b" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.440869 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02f9ddf77ea3a06cf569e0c151c8dbb1be8b7d7c3f61d533ff65d3bd91a0328b\": container with ID starting with 02f9ddf77ea3a06cf569e0c151c8dbb1be8b7d7c3f61d533ff65d3bd91a0328b not found: ID does not exist" containerID="02f9ddf77ea3a06cf569e0c151c8dbb1be8b7d7c3f61d533ff65d3bd91a0328b" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.440899 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02f9ddf77ea3a06cf569e0c151c8dbb1be8b7d7c3f61d533ff65d3bd91a0328b"} err="failed to get container status \"02f9ddf77ea3a06cf569e0c151c8dbb1be8b7d7c3f61d533ff65d3bd91a0328b\": rpc error: code = NotFound desc = could not find container \"02f9ddf77ea3a06cf569e0c151c8dbb1be8b7d7c3f61d533ff65d3bd91a0328b\": container with ID 
starting with 02f9ddf77ea3a06cf569e0c151c8dbb1be8b7d7c3f61d533ff65d3bd91a0328b not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.440916 4810 scope.go:117] "RemoveContainer" containerID="c08eaea4389828436b043867ce58d7b6af66523eaaff5acea169c7a7a640d435" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.441148 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c08eaea4389828436b043867ce58d7b6af66523eaaff5acea169c7a7a640d435\": container with ID starting with c08eaea4389828436b043867ce58d7b6af66523eaaff5acea169c7a7a640d435 not found: ID does not exist" containerID="c08eaea4389828436b043867ce58d7b6af66523eaaff5acea169c7a7a640d435" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.441170 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c08eaea4389828436b043867ce58d7b6af66523eaaff5acea169c7a7a640d435"} err="failed to get container status \"c08eaea4389828436b043867ce58d7b6af66523eaaff5acea169c7a7a640d435\": rpc error: code = NotFound desc = could not find container \"c08eaea4389828436b043867ce58d7b6af66523eaaff5acea169c7a7a640d435\": container with ID starting with c08eaea4389828436b043867ce58d7b6af66523eaaff5acea169c7a7a640d435 not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.441185 4810 scope.go:117] "RemoveContainer" containerID="739e117d06c76ab794c2c79d6ffe9de45d07e0d09d4ce4e731e628ef9c33eb96" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.441455 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"739e117d06c76ab794c2c79d6ffe9de45d07e0d09d4ce4e731e628ef9c33eb96\": container with ID starting with 739e117d06c76ab794c2c79d6ffe9de45d07e0d09d4ce4e731e628ef9c33eb96 not found: ID does not exist" containerID="739e117d06c76ab794c2c79d6ffe9de45d07e0d09d4ce4e731e628ef9c33eb96" Jan 10 
07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.441497 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"739e117d06c76ab794c2c79d6ffe9de45d07e0d09d4ce4e731e628ef9c33eb96"} err="failed to get container status \"739e117d06c76ab794c2c79d6ffe9de45d07e0d09d4ce4e731e628ef9c33eb96\": rpc error: code = NotFound desc = could not find container \"739e117d06c76ab794c2c79d6ffe9de45d07e0d09d4ce4e731e628ef9c33eb96\": container with ID starting with 739e117d06c76ab794c2c79d6ffe9de45d07e0d09d4ce4e731e628ef9c33eb96 not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.441523 4810 scope.go:117] "RemoveContainer" containerID="242eaa6dc72ffb5fc8835d99ac842986ad7247e01e2f33a15c4eb6050f695b0c" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.441780 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"242eaa6dc72ffb5fc8835d99ac842986ad7247e01e2f33a15c4eb6050f695b0c\": container with ID starting with 242eaa6dc72ffb5fc8835d99ac842986ad7247e01e2f33a15c4eb6050f695b0c not found: ID does not exist" containerID="242eaa6dc72ffb5fc8835d99ac842986ad7247e01e2f33a15c4eb6050f695b0c" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.441805 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"242eaa6dc72ffb5fc8835d99ac842986ad7247e01e2f33a15c4eb6050f695b0c"} err="failed to get container status \"242eaa6dc72ffb5fc8835d99ac842986ad7247e01e2f33a15c4eb6050f695b0c\": rpc error: code = NotFound desc = could not find container \"242eaa6dc72ffb5fc8835d99ac842986ad7247e01e2f33a15c4eb6050f695b0c\": container with ID starting with 242eaa6dc72ffb5fc8835d99ac842986ad7247e01e2f33a15c4eb6050f695b0c not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.441823 4810 scope.go:117] "RemoveContainer" 
containerID="157ab48b43b44b78370cf0dba0ea82fa4b2c5cf07495ab5b732b2569cb128325" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.442024 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157ab48b43b44b78370cf0dba0ea82fa4b2c5cf07495ab5b732b2569cb128325\": container with ID starting with 157ab48b43b44b78370cf0dba0ea82fa4b2c5cf07495ab5b732b2569cb128325 not found: ID does not exist" containerID="157ab48b43b44b78370cf0dba0ea82fa4b2c5cf07495ab5b732b2569cb128325" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.442049 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157ab48b43b44b78370cf0dba0ea82fa4b2c5cf07495ab5b732b2569cb128325"} err="failed to get container status \"157ab48b43b44b78370cf0dba0ea82fa4b2c5cf07495ab5b732b2569cb128325\": rpc error: code = NotFound desc = could not find container \"157ab48b43b44b78370cf0dba0ea82fa4b2c5cf07495ab5b732b2569cb128325\": container with ID starting with 157ab48b43b44b78370cf0dba0ea82fa4b2c5cf07495ab5b732b2569cb128325 not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.442066 4810 scope.go:117] "RemoveContainer" containerID="d31c7f0b776d7086b0b35b49e5526c3cc8f26710c6b19227897042e9ac89dfee" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.442277 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d31c7f0b776d7086b0b35b49e5526c3cc8f26710c6b19227897042e9ac89dfee\": container with ID starting with d31c7f0b776d7086b0b35b49e5526c3cc8f26710c6b19227897042e9ac89dfee not found: ID does not exist" containerID="d31c7f0b776d7086b0b35b49e5526c3cc8f26710c6b19227897042e9ac89dfee" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.442299 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d31c7f0b776d7086b0b35b49e5526c3cc8f26710c6b19227897042e9ac89dfee"} err="failed to get container status \"d31c7f0b776d7086b0b35b49e5526c3cc8f26710c6b19227897042e9ac89dfee\": rpc error: code = NotFound desc = could not find container \"d31c7f0b776d7086b0b35b49e5526c3cc8f26710c6b19227897042e9ac89dfee\": container with ID starting with d31c7f0b776d7086b0b35b49e5526c3cc8f26710c6b19227897042e9ac89dfee not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.442315 4810 scope.go:117] "RemoveContainer" containerID="2f53ce34d9866ac67233d56658a17d9f4fb0cb74b07693536aa72b2557aaa94e" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.442598 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f53ce34d9866ac67233d56658a17d9f4fb0cb74b07693536aa72b2557aaa94e\": container with ID starting with 2f53ce34d9866ac67233d56658a17d9f4fb0cb74b07693536aa72b2557aaa94e not found: ID does not exist" containerID="2f53ce34d9866ac67233d56658a17d9f4fb0cb74b07693536aa72b2557aaa94e" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.442622 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f53ce34d9866ac67233d56658a17d9f4fb0cb74b07693536aa72b2557aaa94e"} err="failed to get container status \"2f53ce34d9866ac67233d56658a17d9f4fb0cb74b07693536aa72b2557aaa94e\": rpc error: code = NotFound desc = could not find container \"2f53ce34d9866ac67233d56658a17d9f4fb0cb74b07693536aa72b2557aaa94e\": container with ID starting with 2f53ce34d9866ac67233d56658a17d9f4fb0cb74b07693536aa72b2557aaa94e not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.442639 4810 scope.go:117] "RemoveContainer" containerID="32b7067531b62df3ad7e31945d659dc3837f7dfbf95819d55636075bb0a83cb6" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.442858 4810 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"32b7067531b62df3ad7e31945d659dc3837f7dfbf95819d55636075bb0a83cb6\": container with ID starting with 32b7067531b62df3ad7e31945d659dc3837f7dfbf95819d55636075bb0a83cb6 not found: ID does not exist" containerID="32b7067531b62df3ad7e31945d659dc3837f7dfbf95819d55636075bb0a83cb6" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.442879 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32b7067531b62df3ad7e31945d659dc3837f7dfbf95819d55636075bb0a83cb6"} err="failed to get container status \"32b7067531b62df3ad7e31945d659dc3837f7dfbf95819d55636075bb0a83cb6\": rpc error: code = NotFound desc = could not find container \"32b7067531b62df3ad7e31945d659dc3837f7dfbf95819d55636075bb0a83cb6\": container with ID starting with 32b7067531b62df3ad7e31945d659dc3837f7dfbf95819d55636075bb0a83cb6 not found: ID does not exist" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.442897 4810 scope.go:117] "RemoveContainer" containerID="ae50d21a0f4c7085c72f9fa432f10dc89cafd55acc39d11b53a95f5e0e027ccd" Jan 10 07:08:24 crc kubenswrapper[4810]: E0110 07:08:24.443096 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae50d21a0f4c7085c72f9fa432f10dc89cafd55acc39d11b53a95f5e0e027ccd\": container with ID starting with ae50d21a0f4c7085c72f9fa432f10dc89cafd55acc39d11b53a95f5e0e027ccd not found: ID does not exist" containerID="ae50d21a0f4c7085c72f9fa432f10dc89cafd55acc39d11b53a95f5e0e027ccd" Jan 10 07:08:24 crc kubenswrapper[4810]: I0110 07:08:24.443119 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae50d21a0f4c7085c72f9fa432f10dc89cafd55acc39d11b53a95f5e0e027ccd"} err="failed to get container status \"ae50d21a0f4c7085c72f9fa432f10dc89cafd55acc39d11b53a95f5e0e027ccd\": rpc error: code = NotFound desc = could not find container 
\"ae50d21a0f4c7085c72f9fa432f10dc89cafd55acc39d11b53a95f5e0e027ccd\": container with ID starting with ae50d21a0f4c7085c72f9fa432f10dc89cafd55acc39d11b53a95f5e0e027ccd not found: ID does not exist" Jan 10 07:08:25 crc kubenswrapper[4810]: I0110 07:08:25.701566 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" path="/var/lib/kubelet/pods/b2d79eee-543f-453a-80a5-2c7b4f072992/volumes" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.272412 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.272784 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-expirer" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.272812 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-expirer" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.272834 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-auditor" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.272846 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-auditor" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.272867 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="rsync" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.272879 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="rsync" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.272893 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-server" Jan 10 07:08:26 crc 
kubenswrapper[4810]: I0110 07:08:26.272906 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-server" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.272931 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-updater" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.272943 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-updater" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.272958 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="swift-recon-cron" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.272969 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="swift-recon-cron" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.272996 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-auditor" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.273008 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-auditor" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.273023 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-server" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.273035 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-server" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.273050 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dbb492-98d6-42b7-9c9a-54ebf810716e" containerName="proxy-server" Jan 10 07:08:26 crc 
kubenswrapper[4810]: I0110 07:08:26.273062 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dbb492-98d6-42b7-9c9a-54ebf810716e" containerName="proxy-server" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.273086 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-sharder" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.273098 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-sharder" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.273115 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-replicator" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.273127 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-replicator" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.273145 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-server" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.273157 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-server" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.273182 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-auditor" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276346 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-auditor" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.276393 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-reaper" Jan 10 07:08:26 
crc kubenswrapper[4810]: I0110 07:08:26.276401 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-reaper" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.276422 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-replicator" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276427 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-replicator" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.276440 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-updater" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276452 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-updater" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.276466 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-replicator" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276472 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-replicator" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.276480 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a2ced6-b901-4ec7-8c4b-6d846a41d8e4" containerName="swift-ring-rebalance" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276487 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a2ced6-b901-4ec7-8c4b-6d846a41d8e4" containerName="swift-ring-rebalance" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.276499 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dbb492-98d6-42b7-9c9a-54ebf810716e" containerName="proxy-httpd" Jan 10 07:08:26 
crc kubenswrapper[4810]: I0110 07:08:26.276505 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dbb492-98d6-42b7-9c9a-54ebf810716e" containerName="proxy-httpd" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276732 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-auditor" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276744 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dbb492-98d6-42b7-9c9a-54ebf810716e" containerName="proxy-httpd" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276750 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-server" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276760 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dbb492-98d6-42b7-9c9a-54ebf810716e" containerName="proxy-server" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276768 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-server" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276778 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-expirer" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276783 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-sharder" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276793 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-auditor" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276800 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-updater" Jan 10 
07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276809 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-replicator" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276819 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="account-reaper" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276824 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-replicator" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276832 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-auditor" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276840 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-server" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276848 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="object-replicator" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276857 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="rsync" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276867 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="swift-recon-cron" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276876 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d79eee-543f-453a-80a5-2c7b4f072992" containerName="container-updater" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.276884 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a2ced6-b901-4ec7-8c4b-6d846a41d8e4" 
containerName="swift-ring-rebalance" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.281302 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.284030 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.284117 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.290481 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-vqwgm" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.290992 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.300078 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.305290 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.308448 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.327851 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.334839 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.340874 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.388287 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/35384be4-c73c-470d-b602-c512b41bd815-cache\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.388372 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/35384be4-c73c-470d-b602-c512b41bd815-lock\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.388412 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.388451 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.388496 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9s99\" (UniqueName: 
\"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-kube-api-access-g9s99\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.396601 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.407304 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv"] Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.408654 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.418948 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.441711 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv"] Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.490870 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.490915 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.490943 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/35384be4-c73c-470d-b602-c512b41bd815-cache\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.490963 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldcm9\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-kube-api-access-ldcm9\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.490987 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/61a7bea6-d18a-4633-b837-5ef827cd7f93-lock\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491003 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491030 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/35384be4-c73c-470d-b602-c512b41bd815-lock\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491047 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift\") pod \"swift-storage-1\" (UID: 
\"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491066 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnvnr\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-kube-api-access-lnvnr\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491098 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/61a7bea6-d18a-4633-b837-5ef827cd7f93-cache\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491118 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9e445fdb-b8aa-47ba-9e59-febb527bf622-lock\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491152 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491178 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkkv4\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-kube-api-access-dkkv4\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 
crc kubenswrapper[4810]: I0110 07:08:26.491221 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491237 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d78d54f-aba4-479e-b166-73a91e7388b6-log-httpd\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491262 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d78d54f-aba4-479e-b166-73a91e7388b6-run-httpd\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491276 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9e445fdb-b8aa-47ba-9e59-febb527bf622-cache\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491293 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d78d54f-aba4-479e-b166-73a91e7388b6-config-data\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491310 
4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.491333 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9s99\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-kube-api-access-g9s99\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.492069 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/35384be4-c73c-470d-b602-c512b41bd815-cache\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.492425 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/35384be4-c73c-470d-b602-c512b41bd815-lock\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.492620 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") device mount path \"/mnt/openstack/pv10\"" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.496663 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 
07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.496705 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.496759 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift podName:35384be4-c73c-470d-b602-c512b41bd815 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:26.996737236 +0000 UTC m=+1335.612230139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift") pod "swift-storage-0" (UID: "35384be4-c73c-470d-b602-c512b41bd815") : configmap "swift-ring-files" not found Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.526861 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9s99\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-kube-api-access-g9s99\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.528153 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592163 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592218 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnvnr\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-kube-api-access-lnvnr\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592237 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/61a7bea6-d18a-4633-b837-5ef827cd7f93-cache\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592251 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9e445fdb-b8aa-47ba-9e59-febb527bf622-lock\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592277 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkkv4\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-kube-api-access-dkkv4\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592309 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d78d54f-aba4-479e-b166-73a91e7388b6-log-httpd\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592330 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7d78d54f-aba4-479e-b166-73a91e7388b6-run-httpd\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592343 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9e445fdb-b8aa-47ba-9e59-febb527bf622-cache\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592361 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d78d54f-aba4-479e-b166-73a91e7388b6-config-data\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592374 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592415 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592429 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift\") pod \"swift-storage-2\" (UID: 
\"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592449 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldcm9\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-kube-api-access-ldcm9\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592470 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/61a7bea6-d18a-4633-b837-5ef827cd7f93-lock\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592485 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.592619 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") device mount path \"/mnt/openstack/pv12\"" pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.593282 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d78d54f-aba4-479e-b166-73a91e7388b6-run-httpd\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: 
E0110 07:08:26.593487 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.593927 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") device mount path \"/mnt/openstack/pv02\"" pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.593612 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/61a7bea6-d18a-4633-b837-5ef827cd7f93-cache\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.593853 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9e445fdb-b8aa-47ba-9e59-febb527bf622-cache\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.593858 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/61a7bea6-d18a-4633-b837-5ef827cd7f93-lock\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.593897 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d78d54f-aba4-479e-b166-73a91e7388b6-log-httpd\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 
07:08:26.593938 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.594108 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift podName:61a7bea6-d18a-4633-b837-5ef827cd7f93 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:27.094090812 +0000 UTC m=+1335.709583685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift") pod "swift-storage-1" (UID: "61a7bea6-d18a-4633-b837-5ef827cd7f93") : configmap "swift-ring-files" not found Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.593552 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.594143 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.594203 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift podName:9e445fdb-b8aa-47ba-9e59-febb527bf622 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:27.094172294 +0000 UTC m=+1335.709665177 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift") pod "swift-storage-2" (UID: "9e445fdb-b8aa-47ba-9e59-febb527bf622") : configmap "swift-ring-files" not found Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.593603 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9e445fdb-b8aa-47ba-9e59-febb527bf622-lock\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.593672 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.594242 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv: configmap "swift-ring-files" not found Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.594264 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift podName:7d78d54f-aba4-479e-b166-73a91e7388b6 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:27.094257986 +0000 UTC m=+1335.709750869 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift") pod "swift-proxy-67f6cc5479-jd8vv" (UID: "7d78d54f-aba4-479e-b166-73a91e7388b6") : configmap "swift-ring-files" not found Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.612049 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldcm9\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-kube-api-access-ldcm9\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.612348 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d78d54f-aba4-479e-b166-73a91e7388b6-config-data\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.612453 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.612954 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnvnr\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-kube-api-access-lnvnr\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.614863 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkkv4\" (UniqueName: 
\"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-kube-api-access-dkkv4\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.622977 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:26 crc kubenswrapper[4810]: I0110 07:08:26.997679 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.997931 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.997971 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 10 07:08:26 crc kubenswrapper[4810]: E0110 07:08:26.998031 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift podName:35384be4-c73c-470d-b602-c512b41bd815 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:27.9980079 +0000 UTC m=+1336.613500853 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift") pod "swift-storage-0" (UID: "35384be4-c73c-470d-b602-c512b41bd815") : configmap "swift-ring-files" not found Jan 10 07:08:27 crc kubenswrapper[4810]: I0110 07:08:27.099453 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:27 crc kubenswrapper[4810]: I0110 07:08:27.099552 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:27 crc kubenswrapper[4810]: I0110 07:08:27.099631 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:27 crc kubenswrapper[4810]: E0110 07:08:27.099701 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:27 crc kubenswrapper[4810]: E0110 07:08:27.099755 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 10 07:08:27 crc kubenswrapper[4810]: E0110 07:08:27.099782 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:27 crc kubenswrapper[4810]: E0110 
07:08:27.099783 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:27 crc kubenswrapper[4810]: E0110 07:08:27.099818 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 10 07:08:27 crc kubenswrapper[4810]: E0110 07:08:27.099846 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift podName:9e445fdb-b8aa-47ba-9e59-febb527bf622 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:28.099809682 +0000 UTC m=+1336.715302645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift") pod "swift-storage-2" (UID: "9e445fdb-b8aa-47ba-9e59-febb527bf622") : configmap "swift-ring-files" not found Jan 10 07:08:27 crc kubenswrapper[4810]: E0110 07:08:27.099803 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv: configmap "swift-ring-files" not found Jan 10 07:08:27 crc kubenswrapper[4810]: E0110 07:08:27.099880 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift podName:61a7bea6-d18a-4633-b837-5ef827cd7f93 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:28.099864643 +0000 UTC m=+1336.715357646 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift") pod "swift-storage-1" (UID: "61a7bea6-d18a-4633-b837-5ef827cd7f93") : configmap "swift-ring-files" not found Jan 10 07:08:27 crc kubenswrapper[4810]: E0110 07:08:27.099909 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift podName:7d78d54f-aba4-479e-b166-73a91e7388b6 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:28.099895354 +0000 UTC m=+1336.715388357 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift") pod "swift-proxy-67f6cc5479-jd8vv" (UID: "7d78d54f-aba4-479e-b166-73a91e7388b6") : configmap "swift-ring-files" not found Jan 10 07:08:28 crc kubenswrapper[4810]: I0110 07:08:28.012689 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:28 crc kubenswrapper[4810]: E0110 07:08:28.012871 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:28 crc kubenswrapper[4810]: E0110 07:08:28.013143 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 10 07:08:28 crc kubenswrapper[4810]: E0110 07:08:28.013211 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift podName:35384be4-c73c-470d-b602-c512b41bd815 nodeName:}" failed. 
No retries permitted until 2026-01-10 07:08:30.013178139 +0000 UTC m=+1338.628671022 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift") pod "swift-storage-0" (UID: "35384be4-c73c-470d-b602-c512b41bd815") : configmap "swift-ring-files" not found Jan 10 07:08:28 crc kubenswrapper[4810]: I0110 07:08:28.114175 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:28 crc kubenswrapper[4810]: I0110 07:08:28.114337 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:28 crc kubenswrapper[4810]: I0110 07:08:28.114432 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:28 crc kubenswrapper[4810]: E0110 07:08:28.114583 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:28 crc kubenswrapper[4810]: E0110 07:08:28.114610 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv: configmap "swift-ring-files" not found Jan 10 07:08:28 crc kubenswrapper[4810]: E0110 07:08:28.114627 4810 projected.go:288] Couldn't get configMap 
swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:28 crc kubenswrapper[4810]: E0110 07:08:28.114655 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 10 07:08:28 crc kubenswrapper[4810]: E0110 07:08:28.114660 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift podName:7d78d54f-aba4-479e-b166-73a91e7388b6 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:30.114643133 +0000 UTC m=+1338.730136026 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift") pod "swift-proxy-67f6cc5479-jd8vv" (UID: "7d78d54f-aba4-479e-b166-73a91e7388b6") : configmap "swift-ring-files" not found Jan 10 07:08:28 crc kubenswrapper[4810]: E0110 07:08:28.114714 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift podName:9e445fdb-b8aa-47ba-9e59-febb527bf622 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:30.114692634 +0000 UTC m=+1338.730185577 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift") pod "swift-storage-2" (UID: "9e445fdb-b8aa-47ba-9e59-febb527bf622") : configmap "swift-ring-files" not found Jan 10 07:08:28 crc kubenswrapper[4810]: E0110 07:08:28.114778 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:28 crc kubenswrapper[4810]: E0110 07:08:28.114788 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 10 07:08:28 crc kubenswrapper[4810]: E0110 07:08:28.114812 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift podName:61a7bea6-d18a-4633-b837-5ef827cd7f93 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:30.114803347 +0000 UTC m=+1338.730296240 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift") pod "swift-storage-1" (UID: "61a7bea6-d18a-4633-b837-5ef827cd7f93") : configmap "swift-ring-files" not found Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.041719 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:30 crc kubenswrapper[4810]: E0110 07:08:30.041931 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:30 crc kubenswrapper[4810]: E0110 07:08:30.042290 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 10 07:08:30 crc kubenswrapper[4810]: E0110 07:08:30.042397 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift podName:35384be4-c73c-470d-b602-c512b41bd815 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:34.04237551 +0000 UTC m=+1342.657868383 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift") pod "swift-storage-0" (UID: "35384be4-c73c-470d-b602-c512b41bd815") : configmap "swift-ring-files" not found Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.119973 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-89lnm"] Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.122081 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.125408 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.125550 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.131219 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-89lnm"] Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.143666 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.143836 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.143956 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:30 crc kubenswrapper[4810]: E0110 07:08:30.144257 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:30 crc kubenswrapper[4810]: E0110 07:08:30.144300 4810 projected.go:194] Error preparing data 
for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 10 07:08:30 crc kubenswrapper[4810]: E0110 07:08:30.144382 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift podName:9e445fdb-b8aa-47ba-9e59-febb527bf622 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:34.144353855 +0000 UTC m=+1342.759846778 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift") pod "swift-storage-2" (UID: "9e445fdb-b8aa-47ba-9e59-febb527bf622") : configmap "swift-ring-files" not found Jan 10 07:08:30 crc kubenswrapper[4810]: E0110 07:08:30.144515 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:30 crc kubenswrapper[4810]: E0110 07:08:30.144604 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 10 07:08:30 crc kubenswrapper[4810]: E0110 07:08:30.144712 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift podName:61a7bea6-d18a-4633-b837-5ef827cd7f93 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:34.144692434 +0000 UTC m=+1342.760185337 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift") pod "swift-storage-1" (UID: "61a7bea6-d18a-4633-b837-5ef827cd7f93") : configmap "swift-ring-files" not found Jan 10 07:08:30 crc kubenswrapper[4810]: E0110 07:08:30.144528 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:30 crc kubenswrapper[4810]: E0110 07:08:30.144869 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv: configmap "swift-ring-files" not found Jan 10 07:08:30 crc kubenswrapper[4810]: E0110 07:08:30.144969 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift podName:7d78d54f-aba4-479e-b166-73a91e7388b6 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:34.14495688 +0000 UTC m=+1342.760449853 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift") pod "swift-proxy-67f6cc5479-jd8vv" (UID: "7d78d54f-aba4-479e-b166-73a91e7388b6") : configmap "swift-ring-files" not found Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.245487 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5274bd30-f077-4ca0-98a2-d9eb9911f74c-swiftconf\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.245570 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfvbh\" (UniqueName: \"kubernetes.io/projected/5274bd30-f077-4ca0-98a2-d9eb9911f74c-kube-api-access-mfvbh\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.245612 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5274bd30-f077-4ca0-98a2-d9eb9911f74c-scripts\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.245645 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5274bd30-f077-4ca0-98a2-d9eb9911f74c-etc-swift\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.245669 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5274bd30-f077-4ca0-98a2-d9eb9911f74c-dispersionconf\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.245687 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5274bd30-f077-4ca0-98a2-d9eb9911f74c-ring-data-devices\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.347942 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5274bd30-f077-4ca0-98a2-d9eb9911f74c-swiftconf\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.348051 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfvbh\" (UniqueName: \"kubernetes.io/projected/5274bd30-f077-4ca0-98a2-d9eb9911f74c-kube-api-access-mfvbh\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.348092 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5274bd30-f077-4ca0-98a2-d9eb9911f74c-scripts\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.348130 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5274bd30-f077-4ca0-98a2-d9eb9911f74c-etc-swift\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.348165 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5274bd30-f077-4ca0-98a2-d9eb9911f74c-dispersionconf\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.348239 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5274bd30-f077-4ca0-98a2-d9eb9911f74c-ring-data-devices\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.348970 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5274bd30-f077-4ca0-98a2-d9eb9911f74c-etc-swift\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.349346 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5274bd30-f077-4ca0-98a2-d9eb9911f74c-scripts\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.349746 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/5274bd30-f077-4ca0-98a2-d9eb9911f74c-ring-data-devices\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.354249 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5274bd30-f077-4ca0-98a2-d9eb9911f74c-swiftconf\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.357571 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5274bd30-f077-4ca0-98a2-d9eb9911f74c-dispersionconf\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.366126 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfvbh\" (UniqueName: \"kubernetes.io/projected/5274bd30-f077-4ca0-98a2-d9eb9911f74c-kube-api-access-mfvbh\") pod \"swift-ring-rebalance-89lnm\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.444909 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:30 crc kubenswrapper[4810]: I0110 07:08:30.670848 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-89lnm"] Jan 10 07:08:31 crc kubenswrapper[4810]: I0110 07:08:31.173855 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" event={"ID":"5274bd30-f077-4ca0-98a2-d9eb9911f74c","Type":"ContainerStarted","Data":"4d3c07fade99781031f480acea8bd415266e89ece0282f29137d152599574a6b"} Jan 10 07:08:31 crc kubenswrapper[4810]: I0110 07:08:31.173909 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" event={"ID":"5274bd30-f077-4ca0-98a2-d9eb9911f74c","Type":"ContainerStarted","Data":"53e1c99c8071ec2a4fa5a7833016cc874f5a1561792d577c9f9d7a0a7386bfbc"} Jan 10 07:08:31 crc kubenswrapper[4810]: I0110 07:08:31.196531 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" podStartSLOduration=1.196506698 podStartE2EDuration="1.196506698s" podCreationTimestamp="2026-01-10 07:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:08:31.189626733 +0000 UTC m=+1339.805119646" watchObservedRunningTime="2026-01-10 07:08:31.196506698 +0000 UTC m=+1339.811999601" Jan 10 07:08:34 crc kubenswrapper[4810]: I0110 07:08:34.110396 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:34 crc kubenswrapper[4810]: E0110 07:08:34.110639 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" 
not found Jan 10 07:08:34 crc kubenswrapper[4810]: E0110 07:08:34.111045 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 10 07:08:34 crc kubenswrapper[4810]: E0110 07:08:34.111177 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift podName:35384be4-c73c-470d-b602-c512b41bd815 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:42.111091378 +0000 UTC m=+1350.726584271 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift") pod "swift-storage-0" (UID: "35384be4-c73c-470d-b602-c512b41bd815") : configmap "swift-ring-files" not found Jan 10 07:08:34 crc kubenswrapper[4810]: I0110 07:08:34.212637 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:34 crc kubenswrapper[4810]: I0110 07:08:34.212729 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:34 crc kubenswrapper[4810]: I0110 07:08:34.212783 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:34 crc 
kubenswrapper[4810]: E0110 07:08:34.212944 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:34 crc kubenswrapper[4810]: E0110 07:08:34.212962 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-1: configmap "swift-ring-files" not found Jan 10 07:08:34 crc kubenswrapper[4810]: E0110 07:08:34.213014 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift podName:61a7bea6-d18a-4633-b837-5ef827cd7f93 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:42.212996701 +0000 UTC m=+1350.828489584 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift") pod "swift-storage-1" (UID: "61a7bea6-d18a-4633-b837-5ef827cd7f93") : configmap "swift-ring-files" not found Jan 10 07:08:34 crc kubenswrapper[4810]: E0110 07:08:34.213465 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:34 crc kubenswrapper[4810]: E0110 07:08:34.213486 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv: configmap "swift-ring-files" not found Jan 10 07:08:34 crc kubenswrapper[4810]: E0110 07:08:34.213515 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift podName:7d78d54f-aba4-479e-b166-73a91e7388b6 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:42.213505504 +0000 UTC m=+1350.828998387 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift") pod "swift-proxy-67f6cc5479-jd8vv" (UID: "7d78d54f-aba4-479e-b166-73a91e7388b6") : configmap "swift-ring-files" not found Jan 10 07:08:34 crc kubenswrapper[4810]: E0110 07:08:34.213571 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:08:34 crc kubenswrapper[4810]: E0110 07:08:34.213579 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-2: configmap "swift-ring-files" not found Jan 10 07:08:34 crc kubenswrapper[4810]: E0110 07:08:34.213600 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift podName:9e445fdb-b8aa-47ba-9e59-febb527bf622 nodeName:}" failed. No retries permitted until 2026-01-10 07:08:42.213592976 +0000 UTC m=+1350.829085859 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift") pod "swift-storage-2" (UID: "9e445fdb-b8aa-47ba-9e59-febb527bf622") : configmap "swift-ring-files" not found Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.130837 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.141405 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift\") pod \"swift-storage-0\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.199631 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.237322 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.237444 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.238607 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.244642 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift\") pod \"swift-storage-1\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.245866 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift\") pod \"swift-storage-2\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.246909 4810 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift\") pod \"swift-proxy-67f6cc5479-jd8vv\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") " pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.257221 4810 generic.go:334] "Generic (PLEG): container finished" podID="5274bd30-f077-4ca0-98a2-d9eb9911f74c" containerID="4d3c07fade99781031f480acea8bd415266e89ece0282f29137d152599574a6b" exitCode=0 Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.257618 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" event={"ID":"5274bd30-f077-4ca0-98a2-d9eb9911f74c","Type":"ContainerDied","Data":"4d3c07fade99781031f480acea8bd415266e89ece0282f29137d152599574a6b"} Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.262319 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.342039 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.520621 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.597762 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv"] Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.670412 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:08:42 crc kubenswrapper[4810]: W0110 07:08:42.678567 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35384be4_c73c_470d_b602_c512b41bd815.slice/crio-6def7094e0e85095f5bddb3a27003ee81919a5973bb107b2393149277d188daa WatchSource:0}: Error finding container 6def7094e0e85095f5bddb3a27003ee81919a5973bb107b2393149277d188daa: Status 404 returned error can't find the container with id 6def7094e0e85095f5bddb3a27003ee81919a5973bb107b2393149277d188daa Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.750117 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 10 07:08:42 crc kubenswrapper[4810]: I0110 07:08:42.763963 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.269155 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"69f5e1b55ddd8a51cdb10d86003a82dc34ddba8a939304e310aed63bd32a5a88"} Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.269508 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"46c2fb1d26e030ce2fcaa847f01b470f0f0dcad887a3b14873a99c0d1d15fcc9"} Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.269519 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"c52dccb808b6c0603000db78a7cc13b79e8b9ad8a12dd1fe8f81ba24d0938195"} Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.271022 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"5ba722e037cb33f2303c9f6a818e040929134e01e825bb2252f52bcfb087ef9e"} Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.271043 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"c1111cc844cf46710a0ea506165eb6155bcab772504569b1c245a742a45436ed"} Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.271054 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"6def7094e0e85095f5bddb3a27003ee81919a5973bb107b2393149277d188daa"} Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.273312 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"84afb59822e91019a7cfddbd9dbd110a055508aa456c4e1cdd0db42594944483"} Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.273359 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"2c66cdc4afdd5a61714f8fd228aa35ee834437c05a64f32ab8269659e3053417"} Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.273374 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"8100347cec1272dd3604005cb10798d290c7d1bb3eca3e0e33462080e6f2a923"} Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.282893 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" event={"ID":"7d78d54f-aba4-479e-b166-73a91e7388b6","Type":"ContainerStarted","Data":"a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714"} Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.282970 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" event={"ID":"7d78d54f-aba4-479e-b166-73a91e7388b6","Type":"ContainerStarted","Data":"4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7"} Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.282982 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" event={"ID":"7d78d54f-aba4-479e-b166-73a91e7388b6","Type":"ContainerStarted","Data":"453b8be06705eb7f37ce4b106ddb12a73478f623786b1b31eedb3dc21a8c85bf"} Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.283059 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.283095 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.303870 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" podStartSLOduration=17.3038499 podStartE2EDuration="17.3038499s" podCreationTimestamp="2026-01-10 07:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:08:43.302744784 +0000 UTC m=+1351.918237677" 
watchObservedRunningTime="2026-01-10 07:08:43.3038499 +0000 UTC m=+1351.919342783" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.643696 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.685650 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5274bd30-f077-4ca0-98a2-d9eb9911f74c-swiftconf\") pod \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.685693 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5274bd30-f077-4ca0-98a2-d9eb9911f74c-scripts\") pod \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.685723 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5274bd30-f077-4ca0-98a2-d9eb9911f74c-ring-data-devices\") pod \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.685784 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5274bd30-f077-4ca0-98a2-d9eb9911f74c-dispersionconf\") pod \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.685812 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfvbh\" (UniqueName: \"kubernetes.io/projected/5274bd30-f077-4ca0-98a2-d9eb9911f74c-kube-api-access-mfvbh\") pod \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\" (UID: 
\"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.685881 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5274bd30-f077-4ca0-98a2-d9eb9911f74c-etc-swift\") pod \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\" (UID: \"5274bd30-f077-4ca0-98a2-d9eb9911f74c\") " Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.688089 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5274bd30-f077-4ca0-98a2-d9eb9911f74c-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5274bd30-f077-4ca0-98a2-d9eb9911f74c" (UID: "5274bd30-f077-4ca0-98a2-d9eb9911f74c"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.688273 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5274bd30-f077-4ca0-98a2-d9eb9911f74c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5274bd30-f077-4ca0-98a2-d9eb9911f74c" (UID: "5274bd30-f077-4ca0-98a2-d9eb9911f74c"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.708281 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5274bd30-f077-4ca0-98a2-d9eb9911f74c-kube-api-access-mfvbh" (OuterVolumeSpecName: "kube-api-access-mfvbh") pod "5274bd30-f077-4ca0-98a2-d9eb9911f74c" (UID: "5274bd30-f077-4ca0-98a2-d9eb9911f74c"). InnerVolumeSpecName "kube-api-access-mfvbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.723654 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5274bd30-f077-4ca0-98a2-d9eb9911f74c-scripts" (OuterVolumeSpecName: "scripts") pod "5274bd30-f077-4ca0-98a2-d9eb9911f74c" (UID: "5274bd30-f077-4ca0-98a2-d9eb9911f74c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.748597 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5274bd30-f077-4ca0-98a2-d9eb9911f74c-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5274bd30-f077-4ca0-98a2-d9eb9911f74c" (UID: "5274bd30-f077-4ca0-98a2-d9eb9911f74c"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.751407 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5274bd30-f077-4ca0-98a2-d9eb9911f74c-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5274bd30-f077-4ca0-98a2-d9eb9911f74c" (UID: "5274bd30-f077-4ca0-98a2-d9eb9911f74c"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.789734 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5274bd30-f077-4ca0-98a2-d9eb9911f74c-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.789782 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5274bd30-f077-4ca0-98a2-d9eb9911f74c-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.789793 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfvbh\" (UniqueName: \"kubernetes.io/projected/5274bd30-f077-4ca0-98a2-d9eb9911f74c-kube-api-access-mfvbh\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.789804 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5274bd30-f077-4ca0-98a2-d9eb9911f74c-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.789816 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5274bd30-f077-4ca0-98a2-d9eb9911f74c-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:43 crc kubenswrapper[4810]: I0110 07:08:43.789825 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5274bd30-f077-4ca0-98a2-d9eb9911f74c-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.292440 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" event={"ID":"5274bd30-f077-4ca0-98a2-d9eb9911f74c","Type":"ContainerDied","Data":"53e1c99c8071ec2a4fa5a7833016cc874f5a1561792d577c9f9d7a0a7386bfbc"} Jan 10 07:08:44 crc kubenswrapper[4810]: 
I0110 07:08:44.292745 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53e1c99c8071ec2a4fa5a7833016cc874f5a1561792d577c9f9d7a0a7386bfbc" Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.292481 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-89lnm" Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.315034 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"7052b79c7eae71345ad32ba5b42631d665cb221acf4b4ce24dd0f0b6a78f5327"} Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.315079 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"df4c479583608d3794221071118c08399d0bde969d4b58019106c8359384cca4"} Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.315089 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"9db8ebaaaf644355d90527d55089b19b151e881cc106c970e3ee487545b49d8c"} Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.315099 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"86d23f9c56770a8205ea910f34a2ca1faca100f56c90beba359935e03de5b021"} Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.339669 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"8e1648f94f1d40379cbd3a2dad450cb4b4e0bdea3952d813b6d45addeb42a28c"} Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.339716 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"eace1af34715086870b5562438d03905f272a5cf5775cbcfb0d1ce9ae7908ed9"} Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.339726 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"7974e13c559cc2b816f58026c7024f52424757b650c7c2e9ce4bc885fc426988"} Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.339734 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"d0908efd399824837fa74784a6c8be146f30f6787df69e0e2eacd645349b4c3f"} Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.344167 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"42b3307435cff9c1f27540d516def8fd0fbee9afd64ea83f8e3a8ff2cab5df46"} Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.344230 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"d69ba71fd10314c5787c30859c699e56fb1b3353cd489f56afa4c8748442f43a"} Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.344247 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"574db38430e62e4aab8be062c769da593c603c0e95d9fc45c3a9dd23efab3a37"} Jan 10 07:08:44 crc kubenswrapper[4810]: I0110 07:08:44.344258 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"9cab011a8fd4542f33632c7a9896800e7d8d686813617d394f6c871821f3606d"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.379210 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"59b21531ecf41d1a832ced35c76fa6db9f76208c6e3d229c62ad0c507bc75c98"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.379480 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"b6c287d40f43d63f67a964a51f7fc2e29f639796d4a3d3110277e4ad469c1c18"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.379492 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"177712569c8e54981e296a8fc80f573500c49b4e2e7ccf627fca78e1f773aee8"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.379500 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"fe930c121fe712a087a30b0efb01808b19ce1333560bec981c45b3bf124821e0"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.379508 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"42b52b1f8e2388cb2ad202707e9c694223b0076895bb2e97df9176fbde8e9668"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.386066 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" 
event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"f6e45c8ddea0e157a1fef4618cded3737bdca9eb26c9af4e9d8083081469d20e"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.386106 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"9078d5b74e451da392057b58de47907cc58742c708a8e1e76ce7eb99f277b3a7"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.386116 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"320b26dc3d937e8435175c954afaac3c639220fb5d43294acd63f95f85f8fa6a"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.386124 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"1da9de7dbe13392b022f17fbb43eb6c586514ce892ee55e0b435a87430e6c66c"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.392569 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"86f9b6878fa02ee168b75c2756f8beadfcb6e1db40ce87d0ae728d68b22e51b8"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.392615 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"8d507a636254969000b1a7d908e3078b665671a54ca2bbf1dcc69a0ff5fe30d3"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.392653 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"58c88df8ef32e9de52d7ea468066e46db1bb32032ee27f37fb4d7e896f25ef9b"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.392663 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"611e688e5612b670838a8251c8236d9cb03d1f9926e1a1eda2057acc7351e116"} Jan 10 07:08:45 crc kubenswrapper[4810]: I0110 07:08:45.392671 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"b5002e16aea7e39d32f61224d814375b9e9526efc9e4ade0a3ee92dedccd44e9"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.408953 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"58c20fd7070fac4240a876670259ff2f0a1e879f03e7ea2997db9e3f50a020ec"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.409026 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"68be7dd14ca951bda2841d63d9c2cf81a5121460ad1eecf2cc07d803d2973c58"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.409041 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"2dd5918bcf687a07d9affd5c725339df23b478f00a41846011ae755f155f839f"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.409054 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerStarted","Data":"89d190dd57a30b5bc0c517fd2472a5b6132bf00727d87123b266ce6b5a89d12f"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.416469 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"d480b12f49a977ac6afc39cef6e3be30b4602b0a91b4d56b72cf25e064279b5a"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.416516 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"408caa90be1a5af927636343526e8a819336837225bdd13c863ef2608b9af13c"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.416526 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"a870c8642b348c6216f052eb0d1f2a4a66fa9b94c5223d449cbfaf7af7284bdb"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.416535 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"a831b4dbfe9d65625e504e2bd3674ba9902a7aca2a65a9abb397fc0e4bba632b"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.416546 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerStarted","Data":"16cdd481355f9f1be103a71f3edf4df6da41b172997ae9fc8eea38fa779d1244"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.423146 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" 
event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"655261d3f533e4de50c1f16a943470775f19f69516b1d0e9a38f3f14f56950a0"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.423216 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"83a92983eb8a928938627b9a680c5f1c500d74c6035efc206d01c1d69c9607d3"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.423235 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"f7125dc757f268fc55fbaebe9eb9c73691030da036764248d74fed53467dd548"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.423251 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerStarted","Data":"c18c18bfba43ae0ada590014a21ca1650869e9bfa5ca9f6bf8445cc2708fbf5e"} Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.457097 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=21.457057109 podStartE2EDuration="21.457057109s" podCreationTimestamp="2026-01-10 07:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:08:46.452809518 +0000 UTC m=+1355.068302401" watchObservedRunningTime="2026-01-10 07:08:46.457057109 +0000 UTC m=+1355.072550002" Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.492096 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-2" podStartSLOduration=21.492069246 podStartE2EDuration="21.492069246s" podCreationTimestamp="2026-01-10 07:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:08:46.487428635 +0000 UTC m=+1355.102921528" watchObservedRunningTime="2026-01-10 07:08:46.492069246 +0000 UTC m=+1355.107562149" Jan 10 07:08:46 crc kubenswrapper[4810]: I0110 07:08:46.531653 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-1" podStartSLOduration=21.531634441 podStartE2EDuration="21.531634441s" podCreationTimestamp="2026-01-10 07:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:08:46.528048516 +0000 UTC m=+1355.143541399" watchObservedRunningTime="2026-01-10 07:08:46.531634441 +0000 UTC m=+1355.147127324" Jan 10 07:08:47 crc kubenswrapper[4810]: I0110 07:08:47.349054 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:52 crc kubenswrapper[4810]: I0110 07:08:52.345323 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.349145 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7"] Jan 10 07:08:53 crc kubenswrapper[4810]: E0110 07:08:53.349416 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5274bd30-f077-4ca0-98a2-d9eb9911f74c" containerName="swift-ring-rebalance" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.349428 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5274bd30-f077-4ca0-98a2-d9eb9911f74c" containerName="swift-ring-rebalance" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.349584 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="5274bd30-f077-4ca0-98a2-d9eb9911f74c" containerName="swift-ring-rebalance" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 
07:08:53.350040 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.353189 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.353620 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.370934 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7"] Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.443057 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8d84cb4d-d948-4681-b6e4-95688d6e49af-swiftconf\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.443112 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8d84cb4d-d948-4681-b6e4-95688d6e49af-dispersionconf\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.443149 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d84cb4d-d948-4681-b6e4-95688d6e49af-scripts\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.443176 4810 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8d84cb4d-d948-4681-b6e4-95688d6e49af-etc-swift\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.443220 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzgr6\" (UniqueName: \"kubernetes.io/projected/8d84cb4d-d948-4681-b6e4-95688d6e49af-kube-api-access-fzgr6\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.443237 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8d84cb4d-d948-4681-b6e4-95688d6e49af-ring-data-devices\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.544727 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8d84cb4d-d948-4681-b6e4-95688d6e49af-dispersionconf\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.544792 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d84cb4d-d948-4681-b6e4-95688d6e49af-scripts\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.544821 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8d84cb4d-d948-4681-b6e4-95688d6e49af-etc-swift\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.544847 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzgr6\" (UniqueName: \"kubernetes.io/projected/8d84cb4d-d948-4681-b6e4-95688d6e49af-kube-api-access-fzgr6\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.544866 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8d84cb4d-d948-4681-b6e4-95688d6e49af-ring-data-devices\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.544923 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8d84cb4d-d948-4681-b6e4-95688d6e49af-swiftconf\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.545557 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8d84cb4d-d948-4681-b6e4-95688d6e49af-etc-swift\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: 
\"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.545723 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d84cb4d-d948-4681-b6e4-95688d6e49af-scripts\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.545732 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8d84cb4d-d948-4681-b6e4-95688d6e49af-ring-data-devices\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.551316 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8d84cb4d-d948-4681-b6e4-95688d6e49af-dispersionconf\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.552725 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8d84cb4d-d948-4681-b6e4-95688d6e49af-swiftconf\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.567989 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzgr6\" (UniqueName: \"kubernetes.io/projected/8d84cb4d-d948-4681-b6e4-95688d6e49af-kube-api-access-fzgr6\") pod \"swift-ring-rebalance-debug-7j9g7\" (UID: 
\"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:53 crc kubenswrapper[4810]: I0110 07:08:53.676806 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:54 crc kubenswrapper[4810]: I0110 07:08:54.174308 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7"] Jan 10 07:08:54 crc kubenswrapper[4810]: W0110 07:08:54.179134 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d84cb4d_d948_4681_b6e4_95688d6e49af.slice/crio-b21fb150df3d20789886527c2c2e73b9b5d887d9cd65227b214cf5f6ce7de804 WatchSource:0}: Error finding container b21fb150df3d20789886527c2c2e73b9b5d887d9cd65227b214cf5f6ce7de804: Status 404 returned error can't find the container with id b21fb150df3d20789886527c2c2e73b9b5d887d9cd65227b214cf5f6ce7de804 Jan 10 07:08:54 crc kubenswrapper[4810]: I0110 07:08:54.493149 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" event={"ID":"8d84cb4d-d948-4681-b6e4-95688d6e49af","Type":"ContainerStarted","Data":"b21fb150df3d20789886527c2c2e73b9b5d887d9cd65227b214cf5f6ce7de804"} Jan 10 07:08:55 crc kubenswrapper[4810]: I0110 07:08:55.500447 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" event={"ID":"8d84cb4d-d948-4681-b6e4-95688d6e49af","Type":"ContainerStarted","Data":"860f0c514064154339f307f0e5c5a236022aa3932902ce8237f0e3822515f59c"} Jan 10 07:08:55 crc kubenswrapper[4810]: I0110 07:08:55.519262 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" podStartSLOduration=2.519234315 podStartE2EDuration="2.519234315s" podCreationTimestamp="2026-01-10 07:08:53 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:08:55.51905363 +0000 UTC m=+1364.134546513" watchObservedRunningTime="2026-01-10 07:08:55.519234315 +0000 UTC m=+1364.134727228" Jan 10 07:08:57 crc kubenswrapper[4810]: I0110 07:08:57.515345 4810 generic.go:334] "Generic (PLEG): container finished" podID="8d84cb4d-d948-4681-b6e4-95688d6e49af" containerID="860f0c514064154339f307f0e5c5a236022aa3932902ce8237f0e3822515f59c" exitCode=0 Jan 10 07:08:57 crc kubenswrapper[4810]: I0110 07:08:57.515435 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" event={"ID":"8d84cb4d-d948-4681-b6e4-95688d6e49af","Type":"ContainerDied","Data":"860f0c514064154339f307f0e5c5a236022aa3932902ce8237f0e3822515f59c"} Jan 10 07:08:58 crc kubenswrapper[4810]: I0110 07:08:58.885616 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:58 crc kubenswrapper[4810]: I0110 07:08:58.931951 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7"] Jan 10 07:08:58 crc kubenswrapper[4810]: I0110 07:08:58.939697 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7"] Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.028988 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8d84cb4d-d948-4681-b6e4-95688d6e49af-swiftconf\") pod \"8d84cb4d-d948-4681-b6e4-95688d6e49af\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.029039 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8d84cb4d-d948-4681-b6e4-95688d6e49af-etc-swift\") pod 
\"8d84cb4d-d948-4681-b6e4-95688d6e49af\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.029082 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8d84cb4d-d948-4681-b6e4-95688d6e49af-ring-data-devices\") pod \"8d84cb4d-d948-4681-b6e4-95688d6e49af\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.029168 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzgr6\" (UniqueName: \"kubernetes.io/projected/8d84cb4d-d948-4681-b6e4-95688d6e49af-kube-api-access-fzgr6\") pod \"8d84cb4d-d948-4681-b6e4-95688d6e49af\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.029250 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d84cb4d-d948-4681-b6e4-95688d6e49af-scripts\") pod \"8d84cb4d-d948-4681-b6e4-95688d6e49af\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.029299 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8d84cb4d-d948-4681-b6e4-95688d6e49af-dispersionconf\") pod \"8d84cb4d-d948-4681-b6e4-95688d6e49af\" (UID: \"8d84cb4d-d948-4681-b6e4-95688d6e49af\") " Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.031320 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d84cb4d-d948-4681-b6e4-95688d6e49af-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8d84cb4d-d948-4681-b6e4-95688d6e49af" (UID: "8d84cb4d-d948-4681-b6e4-95688d6e49af"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.033848 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d84cb4d-d948-4681-b6e4-95688d6e49af-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8d84cb4d-d948-4681-b6e4-95688d6e49af" (UID: "8d84cb4d-d948-4681-b6e4-95688d6e49af"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.039462 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d84cb4d-d948-4681-b6e4-95688d6e49af-kube-api-access-fzgr6" (OuterVolumeSpecName: "kube-api-access-fzgr6") pod "8d84cb4d-d948-4681-b6e4-95688d6e49af" (UID: "8d84cb4d-d948-4681-b6e4-95688d6e49af"). InnerVolumeSpecName "kube-api-access-fzgr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.055155 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d84cb4d-d948-4681-b6e4-95688d6e49af-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8d84cb4d-d948-4681-b6e4-95688d6e49af" (UID: "8d84cb4d-d948-4681-b6e4-95688d6e49af"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.077612 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d84cb4d-d948-4681-b6e4-95688d6e49af-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8d84cb4d-d948-4681-b6e4-95688d6e49af" (UID: "8d84cb4d-d948-4681-b6e4-95688d6e49af"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.080695 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d84cb4d-d948-4681-b6e4-95688d6e49af-scripts" (OuterVolumeSpecName: "scripts") pod "8d84cb4d-d948-4681-b6e4-95688d6e49af" (UID: "8d84cb4d-d948-4681-b6e4-95688d6e49af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.087460 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v"] Jan 10 07:08:59 crc kubenswrapper[4810]: E0110 07:08:59.087815 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d84cb4d-d948-4681-b6e4-95688d6e49af" containerName="swift-ring-rebalance" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.087836 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d84cb4d-d948-4681-b6e4-95688d6e49af" containerName="swift-ring-rebalance" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.087997 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d84cb4d-d948-4681-b6e4-95688d6e49af" containerName="swift-ring-rebalance" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.088580 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.098499 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v"] Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.130442 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzgr6\" (UniqueName: \"kubernetes.io/projected/8d84cb4d-d948-4681-b6e4-95688d6e49af-kube-api-access-fzgr6\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.130480 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d84cb4d-d948-4681-b6e4-95688d6e49af-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.130491 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8d84cb4d-d948-4681-b6e4-95688d6e49af-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.130499 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8d84cb4d-d948-4681-b6e4-95688d6e49af-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.130508 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8d84cb4d-d948-4681-b6e4-95688d6e49af-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.130517 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8d84cb4d-d948-4681-b6e4-95688d6e49af-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.232840 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55528e97-23c6-4ae7-8066-0606d0b33cc2-dispersionconf\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.232901 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q2xc\" (UniqueName: \"kubernetes.io/projected/55528e97-23c6-4ae7-8066-0606d0b33cc2-kube-api-access-9q2xc\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.233050 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55528e97-23c6-4ae7-8066-0606d0b33cc2-swiftconf\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.233121 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55528e97-23c6-4ae7-8066-0606d0b33cc2-etc-swift\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.233285 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55528e97-23c6-4ae7-8066-0606d0b33cc2-scripts\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 
07:08:59.233394 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55528e97-23c6-4ae7-8066-0606d0b33cc2-ring-data-devices\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.334882 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55528e97-23c6-4ae7-8066-0606d0b33cc2-etc-swift\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.334959 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55528e97-23c6-4ae7-8066-0606d0b33cc2-scripts\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.334990 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55528e97-23c6-4ae7-8066-0606d0b33cc2-ring-data-devices\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.335027 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55528e97-23c6-4ae7-8066-0606d0b33cc2-dispersionconf\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 
crc kubenswrapper[4810]: I0110 07:08:59.335042 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q2xc\" (UniqueName: \"kubernetes.io/projected/55528e97-23c6-4ae7-8066-0606d0b33cc2-kube-api-access-9q2xc\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.335074 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55528e97-23c6-4ae7-8066-0606d0b33cc2-swiftconf\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.336087 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55528e97-23c6-4ae7-8066-0606d0b33cc2-scripts\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.336800 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55528e97-23c6-4ae7-8066-0606d0b33cc2-ring-data-devices\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.337706 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55528e97-23c6-4ae7-8066-0606d0b33cc2-etc-swift\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc 
kubenswrapper[4810]: I0110 07:08:59.338539 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55528e97-23c6-4ae7-8066-0606d0b33cc2-swiftconf\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.338720 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55528e97-23c6-4ae7-8066-0606d0b33cc2-dispersionconf\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.357264 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q2xc\" (UniqueName: \"kubernetes.io/projected/55528e97-23c6-4ae7-8066-0606d0b33cc2-kube-api-access-9q2xc\") pod \"swift-ring-rebalance-debug-7qn9v\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.443740 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.540718 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b21fb150df3d20789886527c2c2e73b9b5d887d9cd65227b214cf5f6ce7de804" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.540857 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7j9g7" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.716275 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d84cb4d-d948-4681-b6e4-95688d6e49af" path="/var/lib/kubelet/pods/8d84cb4d-d948-4681-b6e4-95688d6e49af/volumes" Jan 10 07:08:59 crc kubenswrapper[4810]: I0110 07:08:59.726204 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v"] Jan 10 07:08:59 crc kubenswrapper[4810]: W0110 07:08:59.733901 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55528e97_23c6_4ae7_8066_0606d0b33cc2.slice/crio-2770bfa2a5d22d996199cf25199fc337f60fa4d5695b605164207cb5b05bebb7 WatchSource:0}: Error finding container 2770bfa2a5d22d996199cf25199fc337f60fa4d5695b605164207cb5b05bebb7: Status 404 returned error can't find the container with id 2770bfa2a5d22d996199cf25199fc337f60fa4d5695b605164207cb5b05bebb7 Jan 10 07:09:00 crc kubenswrapper[4810]: I0110 07:09:00.551834 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" event={"ID":"55528e97-23c6-4ae7-8066-0606d0b33cc2","Type":"ContainerStarted","Data":"589f268af0d9472143b2ab83467e2d4afc9fab34e7280357abbbddbf598c305f"} Jan 10 07:09:00 crc kubenswrapper[4810]: I0110 07:09:00.552155 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" event={"ID":"55528e97-23c6-4ae7-8066-0606d0b33cc2","Type":"ContainerStarted","Data":"2770bfa2a5d22d996199cf25199fc337f60fa4d5695b605164207cb5b05bebb7"} Jan 10 07:09:00 crc kubenswrapper[4810]: I0110 07:09:00.582098 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" podStartSLOduration=1.582063078 podStartE2EDuration="1.582063078s" podCreationTimestamp="2026-01-10 
07:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:09:00.572002167 +0000 UTC m=+1369.187495060" watchObservedRunningTime="2026-01-10 07:09:00.582063078 +0000 UTC m=+1369.197556001" Jan 10 07:09:01 crc kubenswrapper[4810]: I0110 07:09:01.564968 4810 generic.go:334] "Generic (PLEG): container finished" podID="55528e97-23c6-4ae7-8066-0606d0b33cc2" containerID="589f268af0d9472143b2ab83467e2d4afc9fab34e7280357abbbddbf598c305f" exitCode=0 Jan 10 07:09:01 crc kubenswrapper[4810]: I0110 07:09:01.565013 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" event={"ID":"55528e97-23c6-4ae7-8066-0606d0b33cc2","Type":"ContainerDied","Data":"589f268af0d9472143b2ab83467e2d4afc9fab34e7280357abbbddbf598c305f"} Jan 10 07:09:02 crc kubenswrapper[4810]: I0110 07:09:02.878071 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:09:02 crc kubenswrapper[4810]: I0110 07:09:02.918636 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v"] Jan 10 07:09:02 crc kubenswrapper[4810]: I0110 07:09:02.928345 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v"] Jan 10 07:09:02 crc kubenswrapper[4810]: I0110 07:09:02.992528 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55528e97-23c6-4ae7-8066-0606d0b33cc2-ring-data-devices\") pod \"55528e97-23c6-4ae7-8066-0606d0b33cc2\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " Jan 10 07:09:02 crc kubenswrapper[4810]: I0110 07:09:02.992635 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/55528e97-23c6-4ae7-8066-0606d0b33cc2-swiftconf\") pod \"55528e97-23c6-4ae7-8066-0606d0b33cc2\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " Jan 10 07:09:02 crc kubenswrapper[4810]: I0110 07:09:02.992663 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55528e97-23c6-4ae7-8066-0606d0b33cc2-etc-swift\") pod \"55528e97-23c6-4ae7-8066-0606d0b33cc2\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " Jan 10 07:09:02 crc kubenswrapper[4810]: I0110 07:09:02.992714 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55528e97-23c6-4ae7-8066-0606d0b33cc2-dispersionconf\") pod \"55528e97-23c6-4ae7-8066-0606d0b33cc2\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " Jan 10 07:09:02 crc kubenswrapper[4810]: I0110 07:09:02.992755 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q2xc\" (UniqueName: \"kubernetes.io/projected/55528e97-23c6-4ae7-8066-0606d0b33cc2-kube-api-access-9q2xc\") pod \"55528e97-23c6-4ae7-8066-0606d0b33cc2\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " Jan 10 07:09:02 crc kubenswrapper[4810]: I0110 07:09:02.992811 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55528e97-23c6-4ae7-8066-0606d0b33cc2-scripts\") pod \"55528e97-23c6-4ae7-8066-0606d0b33cc2\" (UID: \"55528e97-23c6-4ae7-8066-0606d0b33cc2\") " Jan 10 07:09:02 crc kubenswrapper[4810]: I0110 07:09:02.993919 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55528e97-23c6-4ae7-8066-0606d0b33cc2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "55528e97-23c6-4ae7-8066-0606d0b33cc2" (UID: "55528e97-23c6-4ae7-8066-0606d0b33cc2"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:09:02 crc kubenswrapper[4810]: I0110 07:09:02.994259 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55528e97-23c6-4ae7-8066-0606d0b33cc2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "55528e97-23c6-4ae7-8066-0606d0b33cc2" (UID: "55528e97-23c6-4ae7-8066-0606d0b33cc2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.000124 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55528e97-23c6-4ae7-8066-0606d0b33cc2-kube-api-access-9q2xc" (OuterVolumeSpecName: "kube-api-access-9q2xc") pod "55528e97-23c6-4ae7-8066-0606d0b33cc2" (UID: "55528e97-23c6-4ae7-8066-0606d0b33cc2"). InnerVolumeSpecName "kube-api-access-9q2xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.013521 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55528e97-23c6-4ae7-8066-0606d0b33cc2-scripts" (OuterVolumeSpecName: "scripts") pod "55528e97-23c6-4ae7-8066-0606d0b33cc2" (UID: "55528e97-23c6-4ae7-8066-0606d0b33cc2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.014719 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55528e97-23c6-4ae7-8066-0606d0b33cc2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "55528e97-23c6-4ae7-8066-0606d0b33cc2" (UID: "55528e97-23c6-4ae7-8066-0606d0b33cc2"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.015421 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55528e97-23c6-4ae7-8066-0606d0b33cc2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "55528e97-23c6-4ae7-8066-0606d0b33cc2" (UID: "55528e97-23c6-4ae7-8066-0606d0b33cc2"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.094227 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55528e97-23c6-4ae7-8066-0606d0b33cc2-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.094267 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q2xc\" (UniqueName: \"kubernetes.io/projected/55528e97-23c6-4ae7-8066-0606d0b33cc2-kube-api-access-9q2xc\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.094278 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55528e97-23c6-4ae7-8066-0606d0b33cc2-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.094287 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55528e97-23c6-4ae7-8066-0606d0b33cc2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.094295 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55528e97-23c6-4ae7-8066-0606d0b33cc2-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.094303 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/55528e97-23c6-4ae7-8066-0606d0b33cc2-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.412514 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7"] Jan 10 07:09:03 crc kubenswrapper[4810]: E0110 07:09:03.412895 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55528e97-23c6-4ae7-8066-0606d0b33cc2" containerName="swift-ring-rebalance" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.412918 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="55528e97-23c6-4ae7-8066-0606d0b33cc2" containerName="swift-ring-rebalance" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.413104 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="55528e97-23c6-4ae7-8066-0606d0b33cc2" containerName="swift-ring-rebalance" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.413722 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.425299 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7"] Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.499833 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f81ead3e-4035-4ee9-b028-7e3a08d2644e-swiftconf\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.499869 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f81ead3e-4035-4ee9-b028-7e3a08d2644e-dispersionconf\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.499891 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpxj8\" (UniqueName: \"kubernetes.io/projected/f81ead3e-4035-4ee9-b028-7e3a08d2644e-kube-api-access-zpxj8\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.500090 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f81ead3e-4035-4ee9-b028-7e3a08d2644e-ring-data-devices\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: 
I0110 07:09:03.501229 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f81ead3e-4035-4ee9-b028-7e3a08d2644e-scripts\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.501337 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f81ead3e-4035-4ee9-b028-7e3a08d2644e-etc-swift\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.580982 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2770bfa2a5d22d996199cf25199fc337f60fa4d5695b605164207cb5b05bebb7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.581045 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-7qn9v" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.602888 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f81ead3e-4035-4ee9-b028-7e3a08d2644e-etc-swift\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.603462 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f81ead3e-4035-4ee9-b028-7e3a08d2644e-etc-swift\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.603527 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f81ead3e-4035-4ee9-b028-7e3a08d2644e-swiftconf\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.604045 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f81ead3e-4035-4ee9-b028-7e3a08d2644e-dispersionconf\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.604103 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpxj8\" (UniqueName: \"kubernetes.io/projected/f81ead3e-4035-4ee9-b028-7e3a08d2644e-kube-api-access-zpxj8\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: 
\"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.604248 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f81ead3e-4035-4ee9-b028-7e3a08d2644e-ring-data-devices\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.604322 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f81ead3e-4035-4ee9-b028-7e3a08d2644e-scripts\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.605206 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f81ead3e-4035-4ee9-b028-7e3a08d2644e-scripts\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.605722 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f81ead3e-4035-4ee9-b028-7e3a08d2644e-ring-data-devices\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.607578 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f81ead3e-4035-4ee9-b028-7e3a08d2644e-dispersionconf\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: 
\"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.614635 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f81ead3e-4035-4ee9-b028-7e3a08d2644e-swiftconf\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.619444 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpxj8\" (UniqueName: \"kubernetes.io/projected/f81ead3e-4035-4ee9-b028-7e3a08d2644e-kube-api-access-zpxj8\") pod \"swift-ring-rebalance-debug-pt8c7\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.701121 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55528e97-23c6-4ae7-8066-0606d0b33cc2" path="/var/lib/kubelet/pods/55528e97-23c6-4ae7-8066-0606d0b33cc2/volumes" Jan 10 07:09:03 crc kubenswrapper[4810]: I0110 07:09:03.731857 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:04 crc kubenswrapper[4810]: W0110 07:09:04.180021 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf81ead3e_4035_4ee9_b028_7e3a08d2644e.slice/crio-2be049d6a1424f917f28367c3b5d984fa6b2537cec0657c20e75ed624bc24d5d WatchSource:0}: Error finding container 2be049d6a1424f917f28367c3b5d984fa6b2537cec0657c20e75ed624bc24d5d: Status 404 returned error can't find the container with id 2be049d6a1424f917f28367c3b5d984fa6b2537cec0657c20e75ed624bc24d5d Jan 10 07:09:04 crc kubenswrapper[4810]: I0110 07:09:04.186722 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7"] Jan 10 07:09:04 crc kubenswrapper[4810]: I0110 07:09:04.591279 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" event={"ID":"f81ead3e-4035-4ee9-b028-7e3a08d2644e","Type":"ContainerStarted","Data":"dbbbf84eae6e223dfa07365e667f7ca09deeb91c3e3f01d0cc2bb1f81d61b9a4"} Jan 10 07:09:04 crc kubenswrapper[4810]: I0110 07:09:04.591332 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" event={"ID":"f81ead3e-4035-4ee9-b028-7e3a08d2644e","Type":"ContainerStarted","Data":"2be049d6a1424f917f28367c3b5d984fa6b2537cec0657c20e75ed624bc24d5d"} Jan 10 07:09:04 crc kubenswrapper[4810]: I0110 07:09:04.612466 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" podStartSLOduration=1.612446833 podStartE2EDuration="1.612446833s" podCreationTimestamp="2026-01-10 07:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:09:04.609616955 +0000 UTC m=+1373.225109838" watchObservedRunningTime="2026-01-10 
07:09:04.612446833 +0000 UTC m=+1373.227939736" Jan 10 07:09:06 crc kubenswrapper[4810]: I0110 07:09:06.608331 4810 generic.go:334] "Generic (PLEG): container finished" podID="f81ead3e-4035-4ee9-b028-7e3a08d2644e" containerID="dbbbf84eae6e223dfa07365e667f7ca09deeb91c3e3f01d0cc2bb1f81d61b9a4" exitCode=0 Jan 10 07:09:06 crc kubenswrapper[4810]: I0110 07:09:06.608384 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" event={"ID":"f81ead3e-4035-4ee9-b028-7e3a08d2644e","Type":"ContainerDied","Data":"dbbbf84eae6e223dfa07365e667f7ca09deeb91c3e3f01d0cc2bb1f81d61b9a4"} Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.875503 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.916253 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7"] Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.922840 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7"] Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.971627 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f81ead3e-4035-4ee9-b028-7e3a08d2644e-etc-swift\") pod \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.971676 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f81ead3e-4035-4ee9-b028-7e3a08d2644e-swiftconf\") pod \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.971709 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-zpxj8\" (UniqueName: \"kubernetes.io/projected/f81ead3e-4035-4ee9-b028-7e3a08d2644e-kube-api-access-zpxj8\") pod \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.971726 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f81ead3e-4035-4ee9-b028-7e3a08d2644e-ring-data-devices\") pod \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.971758 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f81ead3e-4035-4ee9-b028-7e3a08d2644e-dispersionconf\") pod \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.972354 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f81ead3e-4035-4ee9-b028-7e3a08d2644e-scripts\") pod \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\" (UID: \"f81ead3e-4035-4ee9-b028-7e3a08d2644e\") " Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.972658 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f81ead3e-4035-4ee9-b028-7e3a08d2644e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f81ead3e-4035-4ee9-b028-7e3a08d2644e" (UID: "f81ead3e-4035-4ee9-b028-7e3a08d2644e"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.972878 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f81ead3e-4035-4ee9-b028-7e3a08d2644e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f81ead3e-4035-4ee9-b028-7e3a08d2644e" (UID: "f81ead3e-4035-4ee9-b028-7e3a08d2644e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.977842 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f81ead3e-4035-4ee9-b028-7e3a08d2644e-kube-api-access-zpxj8" (OuterVolumeSpecName: "kube-api-access-zpxj8") pod "f81ead3e-4035-4ee9-b028-7e3a08d2644e" (UID: "f81ead3e-4035-4ee9-b028-7e3a08d2644e"). InnerVolumeSpecName "kube-api-access-zpxj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.989879 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f81ead3e-4035-4ee9-b028-7e3a08d2644e-scripts" (OuterVolumeSpecName: "scripts") pod "f81ead3e-4035-4ee9-b028-7e3a08d2644e" (UID: "f81ead3e-4035-4ee9-b028-7e3a08d2644e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:09:07 crc kubenswrapper[4810]: I0110 07:09:07.997481 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f81ead3e-4035-4ee9-b028-7e3a08d2644e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f81ead3e-4035-4ee9-b028-7e3a08d2644e" (UID: "f81ead3e-4035-4ee9-b028-7e3a08d2644e"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:09:08 crc kubenswrapper[4810]: I0110 07:09:08.000463 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f81ead3e-4035-4ee9-b028-7e3a08d2644e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f81ead3e-4035-4ee9-b028-7e3a08d2644e" (UID: "f81ead3e-4035-4ee9-b028-7e3a08d2644e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:09:08 crc kubenswrapper[4810]: I0110 07:09:08.074377 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f81ead3e-4035-4ee9-b028-7e3a08d2644e-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:08 crc kubenswrapper[4810]: I0110 07:09:08.074411 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f81ead3e-4035-4ee9-b028-7e3a08d2644e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:08 crc kubenswrapper[4810]: I0110 07:09:08.074422 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f81ead3e-4035-4ee9-b028-7e3a08d2644e-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:08 crc kubenswrapper[4810]: I0110 07:09:08.074433 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpxj8\" (UniqueName: \"kubernetes.io/projected/f81ead3e-4035-4ee9-b028-7e3a08d2644e-kube-api-access-zpxj8\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:08 crc kubenswrapper[4810]: I0110 07:09:08.074443 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f81ead3e-4035-4ee9-b028-7e3a08d2644e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:08 crc kubenswrapper[4810]: I0110 07:09:08.074452 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/f81ead3e-4035-4ee9-b028-7e3a08d2644e-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:08 crc kubenswrapper[4810]: I0110 07:09:08.624433 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2be049d6a1424f917f28367c3b5d984fa6b2537cec0657c20e75ed624bc24d5d" Jan 10 07:09:08 crc kubenswrapper[4810]: I0110 07:09:08.624537 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-pt8c7" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.103448 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v"] Jan 10 07:09:09 crc kubenswrapper[4810]: E0110 07:09:09.104094 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81ead3e-4035-4ee9-b028-7e3a08d2644e" containerName="swift-ring-rebalance" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.104111 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81ead3e-4035-4ee9-b028-7e3a08d2644e" containerName="swift-ring-rebalance" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.104355 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81ead3e-4035-4ee9-b028-7e3a08d2644e" containerName="swift-ring-rebalance" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.104937 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.107071 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.107577 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.113589 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v"] Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.192762 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-scripts\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.192808 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/755f2dca-7808-4892-852a-ef3cfea75806-swiftconf\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.192905 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fdfw\" (UniqueName: \"kubernetes.io/projected/755f2dca-7808-4892-852a-ef3cfea75806-kube-api-access-2fdfw\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.192940 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-ring-data-devices\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.192989 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/755f2dca-7808-4892-852a-ef3cfea75806-dispersionconf\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.193020 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/755f2dca-7808-4892-852a-ef3cfea75806-etc-swift\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.294229 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fdfw\" (UniqueName: \"kubernetes.io/projected/755f2dca-7808-4892-852a-ef3cfea75806-kube-api-access-2fdfw\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.294289 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-ring-data-devices\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.294322 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/755f2dca-7808-4892-852a-ef3cfea75806-dispersionconf\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.294352 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/755f2dca-7808-4892-852a-ef3cfea75806-etc-swift\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.294399 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-scripts\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.294419 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/755f2dca-7808-4892-852a-ef3cfea75806-swiftconf\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.295096 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/755f2dca-7808-4892-852a-ef3cfea75806-etc-swift\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.295164 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-ring-data-devices\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.295454 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-scripts\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.298678 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/755f2dca-7808-4892-852a-ef3cfea75806-dispersionconf\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.306652 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/755f2dca-7808-4892-852a-ef3cfea75806-swiftconf\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.318743 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fdfw\" (UniqueName: \"kubernetes.io/projected/755f2dca-7808-4892-852a-ef3cfea75806-kube-api-access-2fdfw\") pod \"swift-ring-rebalance-debug-h2v7v\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " 
pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.424137 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.667921 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v"] Jan 10 07:09:09 crc kubenswrapper[4810]: I0110 07:09:09.704324 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f81ead3e-4035-4ee9-b028-7e3a08d2644e" path="/var/lib/kubelet/pods/f81ead3e-4035-4ee9-b028-7e3a08d2644e/volumes" Jan 10 07:09:10 crc kubenswrapper[4810]: I0110 07:09:10.646151 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" event={"ID":"755f2dca-7808-4892-852a-ef3cfea75806","Type":"ContainerStarted","Data":"235032ea543cc4a23073e4308f38172ff8b77dd0959775e10e3677c574f078ff"} Jan 10 07:09:10 crc kubenswrapper[4810]: I0110 07:09:10.646516 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" event={"ID":"755f2dca-7808-4892-852a-ef3cfea75806","Type":"ContainerStarted","Data":"25e87e2192cefd2aff0ac713293655febaf0b866807272a148f614e4673a5caf"} Jan 10 07:09:10 crc kubenswrapper[4810]: I0110 07:09:10.665991 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" podStartSLOduration=1.665970497 podStartE2EDuration="1.665970497s" podCreationTimestamp="2026-01-10 07:09:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:09:10.664080051 +0000 UTC m=+1379.279572934" watchObservedRunningTime="2026-01-10 07:09:10.665970497 +0000 UTC m=+1379.281463380" Jan 10 07:09:11 crc kubenswrapper[4810]: I0110 07:09:11.661321 4810 generic.go:334] 
"Generic (PLEG): container finished" podID="755f2dca-7808-4892-852a-ef3cfea75806" containerID="235032ea543cc4a23073e4308f38172ff8b77dd0959775e10e3677c574f078ff" exitCode=0 Jan 10 07:09:11 crc kubenswrapper[4810]: I0110 07:09:11.661656 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" event={"ID":"755f2dca-7808-4892-852a-ef3cfea75806","Type":"ContainerDied","Data":"235032ea543cc4a23073e4308f38172ff8b77dd0959775e10e3677c574f078ff"} Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.015636 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.095900 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/755f2dca-7808-4892-852a-ef3cfea75806-dispersionconf\") pod \"755f2dca-7808-4892-852a-ef3cfea75806\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.095974 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-scripts\") pod \"755f2dca-7808-4892-852a-ef3cfea75806\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.096080 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fdfw\" (UniqueName: \"kubernetes.io/projected/755f2dca-7808-4892-852a-ef3cfea75806-kube-api-access-2fdfw\") pod \"755f2dca-7808-4892-852a-ef3cfea75806\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.096135 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/755f2dca-7808-4892-852a-ef3cfea75806-swiftconf\") pod \"755f2dca-7808-4892-852a-ef3cfea75806\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.096182 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-ring-data-devices\") pod \"755f2dca-7808-4892-852a-ef3cfea75806\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.096286 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/755f2dca-7808-4892-852a-ef3cfea75806-etc-swift\") pod \"755f2dca-7808-4892-852a-ef3cfea75806\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.097637 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/755f2dca-7808-4892-852a-ef3cfea75806-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "755f2dca-7808-4892-852a-ef3cfea75806" (UID: "755f2dca-7808-4892-852a-ef3cfea75806"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.098119 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v"] Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.099388 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "755f2dca-7808-4892-852a-ef3cfea75806" (UID: "755f2dca-7808-4892-852a-ef3cfea75806"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.102415 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v"] Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.116068 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/755f2dca-7808-4892-852a-ef3cfea75806-kube-api-access-2fdfw" (OuterVolumeSpecName: "kube-api-access-2fdfw") pod "755f2dca-7808-4892-852a-ef3cfea75806" (UID: "755f2dca-7808-4892-852a-ef3cfea75806"). InnerVolumeSpecName "kube-api-access-2fdfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.119161 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755f2dca-7808-4892-852a-ef3cfea75806-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "755f2dca-7808-4892-852a-ef3cfea75806" (UID: "755f2dca-7808-4892-852a-ef3cfea75806"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:09:13 crc kubenswrapper[4810]: E0110 07:09:13.121665 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-scripts podName:755f2dca-7808-4892-852a-ef3cfea75806 nodeName:}" failed. No retries permitted until 2026-01-10 07:09:13.621615752 +0000 UTC m=+1382.237108635 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "scripts" (UniqueName: "kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-scripts") pod "755f2dca-7808-4892-852a-ef3cfea75806" (UID: "755f2dca-7808-4892-852a-ef3cfea75806") : error deleting /var/lib/kubelet/pods/755f2dca-7808-4892-852a-ef3cfea75806/volume-subpaths: remove /var/lib/kubelet/pods/755f2dca-7808-4892-852a-ef3cfea75806/volume-subpaths: no such file or directory Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.124396 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755f2dca-7808-4892-852a-ef3cfea75806-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "755f2dca-7808-4892-852a-ef3cfea75806" (UID: "755f2dca-7808-4892-852a-ef3cfea75806"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.198169 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fdfw\" (UniqueName: \"kubernetes.io/projected/755f2dca-7808-4892-852a-ef3cfea75806-kube-api-access-2fdfw\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.198222 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/755f2dca-7808-4892-852a-ef3cfea75806-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.198233 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.198242 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/755f2dca-7808-4892-852a-ef3cfea75806-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 
07:09:13.198273 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/755f2dca-7808-4892-852a-ef3cfea75806-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.678892 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25e87e2192cefd2aff0ac713293655febaf0b866807272a148f614e4673a5caf" Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.678981 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-h2v7v" Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.705678 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-scripts\") pod \"755f2dca-7808-4892-852a-ef3cfea75806\" (UID: \"755f2dca-7808-4892-852a-ef3cfea75806\") " Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.706256 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-scripts" (OuterVolumeSpecName: "scripts") pod "755f2dca-7808-4892-852a-ef3cfea75806" (UID: "755f2dca-7808-4892-852a-ef3cfea75806"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:09:13 crc kubenswrapper[4810]: I0110 07:09:13.806754 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/755f2dca-7808-4892-852a-ef3cfea75806-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.242333 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"] Jan 10 07:09:14 crc kubenswrapper[4810]: E0110 07:09:14.242939 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755f2dca-7808-4892-852a-ef3cfea75806" containerName="swift-ring-rebalance" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.242953 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="755f2dca-7808-4892-852a-ef3cfea75806" containerName="swift-ring-rebalance" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.243108 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="755f2dca-7808-4892-852a-ef3cfea75806" containerName="swift-ring-rebalance" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.245028 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.246944 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.246974 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.256662 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"] Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.312766 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k42wv\" (UniqueName: \"kubernetes.io/projected/6802f926-fcb3-4461-820e-5c0296d56e87-kube-api-access-k42wv\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.312821 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6802f926-fcb3-4461-820e-5c0296d56e87-swiftconf\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.312839 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6802f926-fcb3-4461-820e-5c0296d56e87-scripts\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.312900 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6802f926-fcb3-4461-820e-5c0296d56e87-dispersionconf\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.312934 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6802f926-fcb3-4461-820e-5c0296d56e87-etc-swift\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.312958 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6802f926-fcb3-4461-820e-5c0296d56e87-ring-data-devices\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.413356 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6802f926-fcb3-4461-820e-5c0296d56e87-etc-swift\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn" Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.413401 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6802f926-fcb3-4461-820e-5c0296d56e87-ring-data-devices\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn" Jan 10 07:09:14 crc 
kubenswrapper[4810]: I0110 07:09:14.413442 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k42wv\" (UniqueName: \"kubernetes.io/projected/6802f926-fcb3-4461-820e-5c0296d56e87-kube-api-access-k42wv\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"
Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.413467 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6802f926-fcb3-4461-820e-5c0296d56e87-swiftconf\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"
Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.413482 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6802f926-fcb3-4461-820e-5c0296d56e87-scripts\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"
Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.413540 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6802f926-fcb3-4461-820e-5c0296d56e87-dispersionconf\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"
Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.413754 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6802f926-fcb3-4461-820e-5c0296d56e87-etc-swift\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"
Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.414303 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6802f926-fcb3-4461-820e-5c0296d56e87-ring-data-devices\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"
Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.414327 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6802f926-fcb3-4461-820e-5c0296d56e87-scripts\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"
Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.422646 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6802f926-fcb3-4461-820e-5c0296d56e87-swiftconf\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"
Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.422673 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6802f926-fcb3-4461-820e-5c0296d56e87-dispersionconf\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"
Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.436219 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k42wv\" (UniqueName: \"kubernetes.io/projected/6802f926-fcb3-4461-820e-5c0296d56e87-kube-api-access-k42wv\") pod \"swift-ring-rebalance-debug-xrnxn\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"
Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.577626 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"
Jan 10 07:09:14 crc kubenswrapper[4810]: I0110 07:09:14.793541 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"]
Jan 10 07:09:15 crc kubenswrapper[4810]: I0110 07:09:15.703367 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="755f2dca-7808-4892-852a-ef3cfea75806" path="/var/lib/kubelet/pods/755f2dca-7808-4892-852a-ef3cfea75806/volumes"
Jan 10 07:09:15 crc kubenswrapper[4810]: I0110 07:09:15.704462 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn" event={"ID":"6802f926-fcb3-4461-820e-5c0296d56e87","Type":"ContainerStarted","Data":"80146de48bbad49ad51595c8fc2edb5e797e8b508e2a871d9c9526257e56939f"}
Jan 10 07:09:15 crc kubenswrapper[4810]: I0110 07:09:15.704506 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn" event={"ID":"6802f926-fcb3-4461-820e-5c0296d56e87","Type":"ContainerStarted","Data":"eef4095e36c70d1dc9b5ff0e52526f0dc2312626991e68af2b2f0e036498cdf6"}
Jan 10 07:09:15 crc kubenswrapper[4810]: I0110 07:09:15.714391 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn" podStartSLOduration=1.714373545 podStartE2EDuration="1.714373545s" podCreationTimestamp="2026-01-10 07:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:09:15.711052186 +0000 UTC m=+1384.326545089" watchObservedRunningTime="2026-01-10 07:09:15.714373545 +0000 UTC m=+1384.329866438"
Jan 10 07:09:24 crc kubenswrapper[4810]: I0110 07:09:24.789706 4810 generic.go:334] "Generic (PLEG): container finished" podID="6802f926-fcb3-4461-820e-5c0296d56e87" containerID="80146de48bbad49ad51595c8fc2edb5e797e8b508e2a871d9c9526257e56939f" exitCode=0
Jan 10 07:09:24 crc kubenswrapper[4810]: I0110 07:09:24.789762 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn" event={"ID":"6802f926-fcb3-4461-820e-5c0296d56e87","Type":"ContainerDied","Data":"80146de48bbad49ad51595c8fc2edb5e797e8b508e2a871d9c9526257e56939f"}
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.091540 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.133609 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"]
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.142929 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"]
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.202119 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6802f926-fcb3-4461-820e-5c0296d56e87-scripts\") pod \"6802f926-fcb3-4461-820e-5c0296d56e87\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") "
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.202229 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6802f926-fcb3-4461-820e-5c0296d56e87-etc-swift\") pod \"6802f926-fcb3-4461-820e-5c0296d56e87\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") "
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.202304 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6802f926-fcb3-4461-820e-5c0296d56e87-ring-data-devices\") pod \"6802f926-fcb3-4461-820e-5c0296d56e87\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") "
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.202374 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6802f926-fcb3-4461-820e-5c0296d56e87-swiftconf\") pod \"6802f926-fcb3-4461-820e-5c0296d56e87\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") "
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.202471 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6802f926-fcb3-4461-820e-5c0296d56e87-dispersionconf\") pod \"6802f926-fcb3-4461-820e-5c0296d56e87\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") "
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.202503 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k42wv\" (UniqueName: \"kubernetes.io/projected/6802f926-fcb3-4461-820e-5c0296d56e87-kube-api-access-k42wv\") pod \"6802f926-fcb3-4461-820e-5c0296d56e87\" (UID: \"6802f926-fcb3-4461-820e-5c0296d56e87\") "
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.204227 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6802f926-fcb3-4461-820e-5c0296d56e87-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6802f926-fcb3-4461-820e-5c0296d56e87" (UID: "6802f926-fcb3-4461-820e-5c0296d56e87"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.205409 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6802f926-fcb3-4461-820e-5c0296d56e87-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6802f926-fcb3-4461-820e-5c0296d56e87" (UID: "6802f926-fcb3-4461-820e-5c0296d56e87"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.220343 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6802f926-fcb3-4461-820e-5c0296d56e87-kube-api-access-k42wv" (OuterVolumeSpecName: "kube-api-access-k42wv") pod "6802f926-fcb3-4461-820e-5c0296d56e87" (UID: "6802f926-fcb3-4461-820e-5c0296d56e87"). InnerVolumeSpecName "kube-api-access-k42wv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.229539 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6802f926-fcb3-4461-820e-5c0296d56e87-scripts" (OuterVolumeSpecName: "scripts") pod "6802f926-fcb3-4461-820e-5c0296d56e87" (UID: "6802f926-fcb3-4461-820e-5c0296d56e87"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.230766 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6802f926-fcb3-4461-820e-5c0296d56e87-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6802f926-fcb3-4461-820e-5c0296d56e87" (UID: "6802f926-fcb3-4461-820e-5c0296d56e87"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.239942 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6802f926-fcb3-4461-820e-5c0296d56e87-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6802f926-fcb3-4461-820e-5c0296d56e87" (UID: "6802f926-fcb3-4461-820e-5c0296d56e87"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.304016 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6802f926-fcb3-4461-820e-5c0296d56e87-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.304065 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6802f926-fcb3-4461-820e-5c0296d56e87-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.304082 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6802f926-fcb3-4461-820e-5c0296d56e87-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.304094 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6802f926-fcb3-4461-820e-5c0296d56e87-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.304107 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k42wv\" (UniqueName: \"kubernetes.io/projected/6802f926-fcb3-4461-820e-5c0296d56e87-kube-api-access-k42wv\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.304120 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6802f926-fcb3-4461-820e-5c0296d56e87-scripts\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.813340 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef4095e36c70d1dc9b5ff0e52526f0dc2312626991e68af2b2f0e036498cdf6"
Jan 10 07:09:26 crc kubenswrapper[4810]: I0110 07:09:26.813409 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-xrnxn"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.283940 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"]
Jan 10 07:09:27 crc kubenswrapper[4810]: E0110 07:09:27.284471 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6802f926-fcb3-4461-820e-5c0296d56e87" containerName="swift-ring-rebalance"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.284496 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6802f926-fcb3-4461-820e-5c0296d56e87" containerName="swift-ring-rebalance"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.284748 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="6802f926-fcb3-4461-820e-5c0296d56e87" containerName="swift-ring-rebalance"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.285528 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.287793 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.288493 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.299739 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"]
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.425460 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/399f4d3a-f338-4687-aec3-ebbce735cfeb-swiftconf\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.425627 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjzqb\" (UniqueName: \"kubernetes.io/projected/399f4d3a-f338-4687-aec3-ebbce735cfeb-kube-api-access-hjzqb\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.425722 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/399f4d3a-f338-4687-aec3-ebbce735cfeb-ring-data-devices\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.425821 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/399f4d3a-f338-4687-aec3-ebbce735cfeb-dispersionconf\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.425974 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/399f4d3a-f338-4687-aec3-ebbce735cfeb-etc-swift\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.426226 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/399f4d3a-f338-4687-aec3-ebbce735cfeb-scripts\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.528326 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/399f4d3a-f338-4687-aec3-ebbce735cfeb-dispersionconf\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.528451 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/399f4d3a-f338-4687-aec3-ebbce735cfeb-etc-swift\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.528556 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/399f4d3a-f338-4687-aec3-ebbce735cfeb-scripts\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.528599 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/399f4d3a-f338-4687-aec3-ebbce735cfeb-swiftconf\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.528651 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjzqb\" (UniqueName: \"kubernetes.io/projected/399f4d3a-f338-4687-aec3-ebbce735cfeb-kube-api-access-hjzqb\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.528701 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/399f4d3a-f338-4687-aec3-ebbce735cfeb-ring-data-devices\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.529262 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/399f4d3a-f338-4687-aec3-ebbce735cfeb-etc-swift\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.529838 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/399f4d3a-f338-4687-aec3-ebbce735cfeb-scripts\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.530101 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/399f4d3a-f338-4687-aec3-ebbce735cfeb-ring-data-devices\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.532748 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/399f4d3a-f338-4687-aec3-ebbce735cfeb-dispersionconf\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.533591 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/399f4d3a-f338-4687-aec3-ebbce735cfeb-swiftconf\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.558824 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjzqb\" (UniqueName: \"kubernetes.io/projected/399f4d3a-f338-4687-aec3-ebbce735cfeb-kube-api-access-hjzqb\") pod \"swift-ring-rebalance-debug-m69vh\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.614874 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.701927 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6802f926-fcb3-4461-820e-5c0296d56e87" path="/var/lib/kubelet/pods/6802f926-fcb3-4461-820e-5c0296d56e87/volumes"
Jan 10 07:09:27 crc kubenswrapper[4810]: I0110 07:09:27.866081 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"]
Jan 10 07:09:28 crc kubenswrapper[4810]: I0110 07:09:28.834467 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh" event={"ID":"399f4d3a-f338-4687-aec3-ebbce735cfeb","Type":"ContainerStarted","Data":"4ba45d8d4b2f994b0e98a58b3ec7987b4536e9481edf7c7f047e3ece98da2eea"}
Jan 10 07:09:33 crc kubenswrapper[4810]: I0110 07:09:33.877579 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh" event={"ID":"399f4d3a-f338-4687-aec3-ebbce735cfeb","Type":"ContainerStarted","Data":"d75ab0b7ef65d85456d0886e29d33cbb7ec85a50e5f1c7625654dd003fa770a7"}
Jan 10 07:09:36 crc kubenswrapper[4810]: I0110 07:09:36.910391 4810 generic.go:334] "Generic (PLEG): container finished" podID="399f4d3a-f338-4687-aec3-ebbce735cfeb" containerID="d75ab0b7ef65d85456d0886e29d33cbb7ec85a50e5f1c7625654dd003fa770a7" exitCode=0
Jan 10 07:09:36 crc kubenswrapper[4810]: I0110 07:09:36.910480 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh" event={"ID":"399f4d3a-f338-4687-aec3-ebbce735cfeb","Type":"ContainerDied","Data":"d75ab0b7ef65d85456d0886e29d33cbb7ec85a50e5f1c7625654dd003fa770a7"}
Jan 10 07:09:36 crc kubenswrapper[4810]: I0110 07:09:36.966684 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"]
Jan 10 07:09:36 crc kubenswrapper[4810]: I0110 07:09:36.984899 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"]
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.136525 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"]
Jan 10 07:09:38 crc kubenswrapper[4810]: E0110 07:09:38.137359 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="399f4d3a-f338-4687-aec3-ebbce735cfeb" containerName="swift-ring-rebalance"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.137378 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="399f4d3a-f338-4687-aec3-ebbce735cfeb" containerName="swift-ring-rebalance"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.137576 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="399f4d3a-f338-4687-aec3-ebbce735cfeb" containerName="swift-ring-rebalance"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.138158 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.142797 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"]
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.253700 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.311498 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7xmh\" (UniqueName: \"kubernetes.io/projected/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-kube-api-access-k7xmh\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.311552 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-etc-swift\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.311603 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-swiftconf\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.311631 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-ring-data-devices\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.311667 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-dispersionconf\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.311700 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-scripts\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.412393 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjzqb\" (UniqueName: \"kubernetes.io/projected/399f4d3a-f338-4687-aec3-ebbce735cfeb-kube-api-access-hjzqb\") pod \"399f4d3a-f338-4687-aec3-ebbce735cfeb\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") "
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.412469 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/399f4d3a-f338-4687-aec3-ebbce735cfeb-scripts\") pod \"399f4d3a-f338-4687-aec3-ebbce735cfeb\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") "
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.412596 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/399f4d3a-f338-4687-aec3-ebbce735cfeb-ring-data-devices\") pod \"399f4d3a-f338-4687-aec3-ebbce735cfeb\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") "
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.412641 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/399f4d3a-f338-4687-aec3-ebbce735cfeb-dispersionconf\") pod \"399f4d3a-f338-4687-aec3-ebbce735cfeb\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") "
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.412719 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/399f4d3a-f338-4687-aec3-ebbce735cfeb-etc-swift\") pod \"399f4d3a-f338-4687-aec3-ebbce735cfeb\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") "
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.412752 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/399f4d3a-f338-4687-aec3-ebbce735cfeb-swiftconf\") pod \"399f4d3a-f338-4687-aec3-ebbce735cfeb\" (UID: \"399f4d3a-f338-4687-aec3-ebbce735cfeb\") "
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.412988 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-dispersionconf\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.413078 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-scripts\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.413166 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7xmh\" (UniqueName: \"kubernetes.io/projected/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-kube-api-access-k7xmh\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.413256 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-etc-swift\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.413334 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-swiftconf\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.413388 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-ring-data-devices\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.414641 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-etc-swift\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.414781 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/399f4d3a-f338-4687-aec3-ebbce735cfeb-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "399f4d3a-f338-4687-aec3-ebbce735cfeb" (UID: "399f4d3a-f338-4687-aec3-ebbce735cfeb"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.415733 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/399f4d3a-f338-4687-aec3-ebbce735cfeb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "399f4d3a-f338-4687-aec3-ebbce735cfeb" (UID: "399f4d3a-f338-4687-aec3-ebbce735cfeb"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.415895 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-ring-data-devices\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.416972 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-scripts\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.420030 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/399f4d3a-f338-4687-aec3-ebbce735cfeb-kube-api-access-hjzqb" (OuterVolumeSpecName: "kube-api-access-hjzqb") pod "399f4d3a-f338-4687-aec3-ebbce735cfeb" (UID: "399f4d3a-f338-4687-aec3-ebbce735cfeb"). InnerVolumeSpecName "kube-api-access-hjzqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.421147 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-dispersionconf\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.421715 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-swiftconf\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.432079 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7xmh\" (UniqueName: \"kubernetes.io/projected/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-kube-api-access-k7xmh\") pod \"swift-ring-rebalance-debug-ss4zt\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.458038 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399f4d3a-f338-4687-aec3-ebbce735cfeb-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "399f4d3a-f338-4687-aec3-ebbce735cfeb" (UID: "399f4d3a-f338-4687-aec3-ebbce735cfeb"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.458090 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/399f4d3a-f338-4687-aec3-ebbce735cfeb-scripts" (OuterVolumeSpecName: "scripts") pod "399f4d3a-f338-4687-aec3-ebbce735cfeb" (UID: "399f4d3a-f338-4687-aec3-ebbce735cfeb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.462760 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399f4d3a-f338-4687-aec3-ebbce735cfeb-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "399f4d3a-f338-4687-aec3-ebbce735cfeb" (UID: "399f4d3a-f338-4687-aec3-ebbce735cfeb"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.472340 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.514591 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/399f4d3a-f338-4687-aec3-ebbce735cfeb-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.514629 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/399f4d3a-f338-4687-aec3-ebbce735cfeb-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.514641 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/399f4d3a-f338-4687-aec3-ebbce735cfeb-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.514652 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/399f4d3a-f338-4687-aec3-ebbce735cfeb-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.514664 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjzqb\" (UniqueName: \"kubernetes.io/projected/399f4d3a-f338-4687-aec3-ebbce735cfeb-kube-api-access-hjzqb\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.514678 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/399f4d3a-f338-4687-aec3-ebbce735cfeb-scripts\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.935398 4810 scope.go:117] "RemoveContainer" containerID="d75ab0b7ef65d85456d0886e29d33cbb7ec85a50e5f1c7625654dd003fa770a7"
Jan 10 07:09:38 crc kubenswrapper[4810]: I0110 07:09:38.935480 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-m69vh"
Jan 10 07:09:39 crc kubenswrapper[4810]: I0110 07:09:39.043822 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"]
Jan 10 07:09:39 crc kubenswrapper[4810]: I0110 07:09:39.712314 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="399f4d3a-f338-4687-aec3-ebbce735cfeb" path="/var/lib/kubelet/pods/399f4d3a-f338-4687-aec3-ebbce735cfeb/volumes"
Jan 10 07:09:39 crc kubenswrapper[4810]: I0110 07:09:39.970802 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt" event={"ID":"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a","Type":"ContainerStarted","Data":"7908d5d58eaa4bf3162adb646a95bab2397123b4f0f776970777c824a9417f67"}
Jan 10 07:09:40 crc kubenswrapper[4810]: I0110 07:09:40.983463 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt" event={"ID":"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a","Type":"ContainerStarted","Data":"ffe3a3efc645f09fbb6ef60fd5995cec99e4f4f0ff116ce1b150fe674c3234ef"}
Jan 10 07:09:42 crc kubenswrapper[4810]: I0110 07:09:42.019179 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt" podStartSLOduration=4.01915936 podStartE2EDuration="4.01915936s" podCreationTimestamp="2026-01-10 07:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:09:42.013106146 +0000 UTC m=+1410.628599029" watchObservedRunningTime="2026-01-10 07:09:42.01915936 +0000 UTC m=+1410.634652243" Jan 10 07:09:44 crc kubenswrapper[4810]: I0110 07:09:44.015224 4810 generic.go:334] "Generic (PLEG): container finished" podID="bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a" containerID="ffe3a3efc645f09fbb6ef60fd5995cec99e4f4f0ff116ce1b150fe674c3234ef" exitCode=0 Jan 10 07:09:44 crc kubenswrapper[4810]: I0110 07:09:44.015240 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt" event={"ID":"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a","Type":"ContainerDied","Data":"ffe3a3efc645f09fbb6ef60fd5995cec99e4f4f0ff116ce1b150fe674c3234ef"} Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.469545 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.503254 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"] Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.507605 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"] Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.521069 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-etc-swift\") pod \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.521129 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-dispersionconf\") pod \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.521165 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-scripts\") pod \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.521221 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-ring-data-devices\") pod \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.521281 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" 
(UniqueName: \"kubernetes.io/secret/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-swiftconf\") pod \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.521339 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7xmh\" (UniqueName: \"kubernetes.io/projected/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-kube-api-access-k7xmh\") pod \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\" (UID: \"bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a\") " Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.521849 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a" (UID: "bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.522410 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a" (UID: "bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.528864 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-kube-api-access-k7xmh" (OuterVolumeSpecName: "kube-api-access-k7xmh") pod "bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a" (UID: "bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a"). InnerVolumeSpecName "kube-api-access-k7xmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.551966 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-scripts" (OuterVolumeSpecName: "scripts") pod "bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a" (UID: "bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.555041 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a" (UID: "bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.565809 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a" (UID: "bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.622162 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.622214 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.622255 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.622269 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.622280 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.622291 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7xmh\" (UniqueName: \"kubernetes.io/projected/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a-kube-api-access-k7xmh\") on node \"crc\" DevicePath \"\"" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.634562 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635035 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" 
podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-server" containerID="cri-o://c1111cc844cf46710a0ea506165eb6155bcab772504569b1c245a742a45436ed" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635096 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-server" containerID="cri-o://177712569c8e54981e296a8fc80f573500c49b4e2e7ccf627fca78e1f773aee8" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635151 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-replicator" containerID="cri-o://b6c287d40f43d63f67a964a51f7fc2e29f639796d4a3d3110277e4ad469c1c18" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635173 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-auditor" containerID="cri-o://9cab011a8fd4542f33632c7a9896800e7d8d686813617d394f6c871821f3606d" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635144 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-auditor" containerID="cri-o://59b21531ecf41d1a832ced35c76fa6db9f76208c6e3d229c62ad0c507bc75c98" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635160 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-reaper" containerID="cri-o://574db38430e62e4aab8be062c769da593c603c0e95d9fc45c3a9dd23efab3a37" gracePeriod=30 Jan 10 07:09:45 crc 
kubenswrapper[4810]: I0110 07:09:45.635240 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-updater" containerID="cri-o://fe930c121fe712a087a30b0efb01808b19ce1333560bec981c45b3bf124821e0" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635284 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="rsync" containerID="cri-o://68be7dd14ca951bda2841d63d9c2cf81a5121460ad1eecf2cc07d803d2973c58" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635289 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-replicator" containerID="cri-o://42b3307435cff9c1f27540d516def8fd0fbee9afd64ea83f8e3a8ff2cab5df46" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635220 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-replicator" containerID="cri-o://5ba722e037cb33f2303c9f6a818e040929134e01e825bb2252f52bcfb087ef9e" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635224 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="swift-recon-cron" containerID="cri-o://58c20fd7070fac4240a876670259ff2f0a1e879f03e7ea2997db9e3f50a020ec" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635226 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" 
containerName="container-auditor" containerID="cri-o://42b52b1f8e2388cb2ad202707e9c694223b0076895bb2e97df9176fbde8e9668" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635268 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-updater" containerID="cri-o://89d190dd57a30b5bc0c517fd2472a5b6132bf00727d87123b266ce6b5a89d12f" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635390 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-expirer" containerID="cri-o://2dd5918bcf687a07d9affd5c725339df23b478f00a41846011ae755f155f839f" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.635540 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-server" containerID="cri-o://d69ba71fd10314c5787c30859c699e56fb1b3353cd489f56afa4c8748442f43a" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.659248 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.659714 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-server" containerID="cri-o://46c2fb1d26e030ce2fcaa847f01b470f0f0dcad887a3b14873a99c0d1d15fcc9" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660087 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="swift-recon-cron" 
containerID="cri-o://655261d3f533e4de50c1f16a943470775f19f69516b1d0e9a38f3f14f56950a0" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660142 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="rsync" containerID="cri-o://83a92983eb8a928938627b9a680c5f1c500d74c6035efc206d01c1d69c9607d3" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660177 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-expirer" containerID="cri-o://f7125dc757f268fc55fbaebe9eb9c73691030da036764248d74fed53467dd548" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660229 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-updater" containerID="cri-o://c18c18bfba43ae0ada590014a21ca1650869e9bfa5ca9f6bf8445cc2708fbf5e" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660259 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-auditor" containerID="cri-o://86f9b6878fa02ee168b75c2756f8beadfcb6e1db40ce87d0ae728d68b22e51b8" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660288 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-replicator" containerID="cri-o://8d507a636254969000b1a7d908e3078b665671a54ca2bbf1dcc69a0ff5fe30d3" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660317 4810 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-server" containerID="cri-o://58c88df8ef32e9de52d7ea468066e46db1bb32032ee27f37fb4d7e896f25ef9b" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660345 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-updater" containerID="cri-o://611e688e5612b670838a8251c8236d9cb03d1f9926e1a1eda2057acc7351e116" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660379 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-auditor" containerID="cri-o://b5002e16aea7e39d32f61224d814375b9e9526efc9e4ade0a3ee92dedccd44e9" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660411 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-replicator" containerID="cri-o://8e1648f94f1d40379cbd3a2dad450cb4b4e0bdea3952d813b6d45addeb42a28c" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660444 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-server" containerID="cri-o://eace1af34715086870b5562438d03905f272a5cf5775cbcfb0d1ce9ae7908ed9" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660594 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-auditor" 
containerID="cri-o://d0908efd399824837fa74784a6c8be146f30f6787df69e0e2eacd645349b4c3f" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660731 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-replicator" containerID="cri-o://69f5e1b55ddd8a51cdb10d86003a82dc34ddba8a939304e310aed63bd32a5a88" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.660750 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-1" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-reaper" containerID="cri-o://7974e13c559cc2b816f58026c7024f52424757b650c7c2e9ce4bc885fc426988" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.677643 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678099 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-server" containerID="cri-o://2c66cdc4afdd5a61714f8fd228aa35ee834437c05a64f32ab8269659e3053417" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678247 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-updater" containerID="cri-o://320b26dc3d937e8435175c954afaac3c639220fb5d43294acd63f95f85f8fa6a" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678235 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-server" 
containerID="cri-o://9078d5b74e451da392057b58de47907cc58742c708a8e1e76ce7eb99f277b3a7" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678283 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-auditor" containerID="cri-o://1da9de7dbe13392b022f17fbb43eb6c586514ce892ee55e0b435a87430e6c66c" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678317 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-replicator" containerID="cri-o://7052b79c7eae71345ad32ba5b42631d665cb221acf4b4ce24dd0f0b6a78f5327" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678400 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-expirer" containerID="cri-o://a870c8642b348c6216f052eb0d1f2a4a66fa9b94c5223d449cbfaf7af7284bdb" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678495 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-updater" containerID="cri-o://a831b4dbfe9d65625e504e2bd3674ba9902a7aca2a65a9abb397fc0e4bba632b" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678570 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-auditor" containerID="cri-o://16cdd481355f9f1be103a71f3edf4df6da41b172997ae9fc8eea38fa779d1244" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678640 4810 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-replicator" containerID="cri-o://f6e45c8ddea0e157a1fef4618cded3737bdca9eb26c9af4e9d8083081469d20e" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678706 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="swift-recon-cron" containerID="cri-o://d480b12f49a977ac6afc39cef6e3be30b4602b0a91b4d56b72cf25e064279b5a" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678734 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="rsync" containerID="cri-o://408caa90be1a5af927636343526e8a819336837225bdd13c863ef2608b9af13c" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678759 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-auditor" containerID="cri-o://86d23f9c56770a8205ea910f34a2ca1faca100f56c90beba359935e03de5b021" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678793 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-server" containerID="cri-o://df4c479583608d3794221071118c08399d0bde969d4b58019106c8359384cca4" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678816 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-replicator" 
containerID="cri-o://84afb59822e91019a7cfddbd9dbd110a055508aa456c4e1cdd0db42594944483" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.678825 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-2" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-reaper" containerID="cri-o://9db8ebaaaf644355d90527d55089b19b151e881cc106c970e3ee487545b49d8c" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.685819 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-89lnm"] Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.730634 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a" path="/var/lib/kubelet/pods/bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a/volumes" Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.731140 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-89lnm"] Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.731172 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv"] Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.731364 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" podUID="7d78d54f-aba4-479e-b166-73a91e7388b6" containerName="proxy-httpd" containerID="cri-o://4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7" gracePeriod=30 Jan 10 07:09:45 crc kubenswrapper[4810]: I0110 07:09:45.731864 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" podUID="7d78d54f-aba4-479e-b166-73a91e7388b6" containerName="proxy-server" containerID="cri-o://a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714" gracePeriod=30 Jan 10 07:09:46 crc kubenswrapper[4810]: 
I0110 07:09:46.044002 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="f7125dc757f268fc55fbaebe9eb9c73691030da036764248d74fed53467dd548" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.044288 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="c18c18bfba43ae0ada590014a21ca1650869e9bfa5ca9f6bf8445cc2708fbf5e" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.044344 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="86f9b6878fa02ee168b75c2756f8beadfcb6e1db40ce87d0ae728d68b22e51b8" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.044410 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="8d507a636254969000b1a7d908e3078b665671a54ca2bbf1dcc69a0ff5fe30d3" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.044469 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="611e688e5612b670838a8251c8236d9cb03d1f9926e1a1eda2057acc7351e116" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.044518 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="b5002e16aea7e39d32f61224d814375b9e9526efc9e4ade0a3ee92dedccd44e9" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.044572 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="8e1648f94f1d40379cbd3a2dad450cb4b4e0bdea3952d813b6d45addeb42a28c" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.044621 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="eace1af34715086870b5562438d03905f272a5cf5775cbcfb0d1ce9ae7908ed9" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.044675 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="7974e13c559cc2b816f58026c7024f52424757b650c7c2e9ce4bc885fc426988" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.044728 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="d0908efd399824837fa74784a6c8be146f30f6787df69e0e2eacd645349b4c3f" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.044782 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="69f5e1b55ddd8a51cdb10d86003a82dc34ddba8a939304e310aed63bd32a5a88" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.044064 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"f7125dc757f268fc55fbaebe9eb9c73691030da036764248d74fed53467dd548"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.045015 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"c18c18bfba43ae0ada590014a21ca1650869e9bfa5ca9f6bf8445cc2708fbf5e"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.045072 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"86f9b6878fa02ee168b75c2756f8beadfcb6e1db40ce87d0ae728d68b22e51b8"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.045131 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"8d507a636254969000b1a7d908e3078b665671a54ca2bbf1dcc69a0ff5fe30d3"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.045249 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"611e688e5612b670838a8251c8236d9cb03d1f9926e1a1eda2057acc7351e116"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.045307 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"b5002e16aea7e39d32f61224d814375b9e9526efc9e4ade0a3ee92dedccd44e9"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.045360 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"8e1648f94f1d40379cbd3a2dad450cb4b4e0bdea3952d813b6d45addeb42a28c"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.045423 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"eace1af34715086870b5562438d03905f272a5cf5775cbcfb0d1ce9ae7908ed9"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.045478 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"7974e13c559cc2b816f58026c7024f52424757b650c7c2e9ce4bc885fc426988"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.045539 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"d0908efd399824837fa74784a6c8be146f30f6787df69e0e2eacd645349b4c3f"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.045598 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"69f5e1b55ddd8a51cdb10d86003a82dc34ddba8a939304e310aed63bd32a5a88"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.047407 4810 scope.go:117] "RemoveContainer" containerID="ffe3a3efc645f09fbb6ef60fd5995cec99e4f4f0ff116ce1b150fe674c3234ef"
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.047413 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-debug-ss4zt"
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068313 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="2dd5918bcf687a07d9affd5c725339df23b478f00a41846011ae755f155f839f" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068353 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="89d190dd57a30b5bc0c517fd2472a5b6132bf00727d87123b266ce6b5a89d12f" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068362 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="59b21531ecf41d1a832ced35c76fa6db9f76208c6e3d229c62ad0c507bc75c98" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068369 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="b6c287d40f43d63f67a964a51f7fc2e29f639796d4a3d3110277e4ad469c1c18" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068378 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="fe930c121fe712a087a30b0efb01808b19ce1333560bec981c45b3bf124821e0" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068386 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="42b52b1f8e2388cb2ad202707e9c694223b0076895bb2e97df9176fbde8e9668" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068357 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"2dd5918bcf687a07d9affd5c725339df23b478f00a41846011ae755f155f839f"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068433 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"89d190dd57a30b5bc0c517fd2472a5b6132bf00727d87123b266ce6b5a89d12f"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068454 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"59b21531ecf41d1a832ced35c76fa6db9f76208c6e3d229c62ad0c507bc75c98"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068467 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"b6c287d40f43d63f67a964a51f7fc2e29f639796d4a3d3110277e4ad469c1c18"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068392 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="42b3307435cff9c1f27540d516def8fd0fbee9afd64ea83f8e3a8ff2cab5df46" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068478 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"fe930c121fe712a087a30b0efb01808b19ce1333560bec981c45b3bf124821e0"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068489 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"42b52b1f8e2388cb2ad202707e9c694223b0076895bb2e97df9176fbde8e9668"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068500 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"42b3307435cff9c1f27540d516def8fd0fbee9afd64ea83f8e3a8ff2cab5df46"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068520 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"574db38430e62e4aab8be062c769da593c603c0e95d9fc45c3a9dd23efab3a37"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068495 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="574db38430e62e4aab8be062c769da593c603c0e95d9fc45c3a9dd23efab3a37" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068541 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="9cab011a8fd4542f33632c7a9896800e7d8d686813617d394f6c871821f3606d" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068556 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="5ba722e037cb33f2303c9f6a818e040929134e01e825bb2252f52bcfb087ef9e" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068607 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"9cab011a8fd4542f33632c7a9896800e7d8d686813617d394f6c871821f3606d"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.068621 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"5ba722e037cb33f2303c9f6a818e040929134e01e825bb2252f52bcfb087ef9e"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082509 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="a870c8642b348c6216f052eb0d1f2a4a66fa9b94c5223d449cbfaf7af7284bdb" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082541 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="a831b4dbfe9d65625e504e2bd3674ba9902a7aca2a65a9abb397fc0e4bba632b" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082552 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="16cdd481355f9f1be103a71f3edf4df6da41b172997ae9fc8eea38fa779d1244" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082560 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="f6e45c8ddea0e157a1fef4618cded3737bdca9eb26c9af4e9d8083081469d20e" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082571 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="320b26dc3d937e8435175c954afaac3c639220fb5d43294acd63f95f85f8fa6a" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082579 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="1da9de7dbe13392b022f17fbb43eb6c586514ce892ee55e0b435a87430e6c66c" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082589 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="7052b79c7eae71345ad32ba5b42631d665cb221acf4b4ce24dd0f0b6a78f5327" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082597 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="9db8ebaaaf644355d90527d55089b19b151e881cc106c970e3ee487545b49d8c" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082613 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="86d23f9c56770a8205ea910f34a2ca1faca100f56c90beba359935e03de5b021" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082620 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="84afb59822e91019a7cfddbd9dbd110a055508aa456c4e1cdd0db42594944483" exitCode=0
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082584 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"a870c8642b348c6216f052eb0d1f2a4a66fa9b94c5223d449cbfaf7af7284bdb"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082676 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"a831b4dbfe9d65625e504e2bd3674ba9902a7aca2a65a9abb397fc0e4bba632b"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082696 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"16cdd481355f9f1be103a71f3edf4df6da41b172997ae9fc8eea38fa779d1244"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082708 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"f6e45c8ddea0e157a1fef4618cded3737bdca9eb26c9af4e9d8083081469d20e"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082718 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"320b26dc3d937e8435175c954afaac3c639220fb5d43294acd63f95f85f8fa6a"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082729 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"1da9de7dbe13392b022f17fbb43eb6c586514ce892ee55e0b435a87430e6c66c"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082740 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"7052b79c7eae71345ad32ba5b42631d665cb221acf4b4ce24dd0f0b6a78f5327"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082750 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"9db8ebaaaf644355d90527d55089b19b151e881cc106c970e3ee487545b49d8c"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082760 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"86d23f9c56770a8205ea910f34a2ca1faca100f56c90beba359935e03de5b021"}
Jan 10 07:09:46 crc kubenswrapper[4810]: I0110 07:09:46.082770 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"84afb59822e91019a7cfddbd9dbd110a055508aa456c4e1cdd0db42594944483"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.025048 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv"
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.152667 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d78d54f-aba4-479e-b166-73a91e7388b6-run-httpd\") pod \"7d78d54f-aba4-479e-b166-73a91e7388b6\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") "
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.152735 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d78d54f-aba4-479e-b166-73a91e7388b6-log-httpd\") pod \"7d78d54f-aba4-479e-b166-73a91e7388b6\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") "
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.152808 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift\") pod \"7d78d54f-aba4-479e-b166-73a91e7388b6\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") "
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.152882 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d78d54f-aba4-479e-b166-73a91e7388b6-config-data\") pod \"7d78d54f-aba4-479e-b166-73a91e7388b6\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") "
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.152903 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldcm9\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-kube-api-access-ldcm9\") pod \"7d78d54f-aba4-479e-b166-73a91e7388b6\" (UID: \"7d78d54f-aba4-479e-b166-73a91e7388b6\") "
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.153498 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d78d54f-aba4-479e-b166-73a91e7388b6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7d78d54f-aba4-479e-b166-73a91e7388b6" (UID: "7d78d54f-aba4-479e-b166-73a91e7388b6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.153844 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d78d54f-aba4-479e-b166-73a91e7388b6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7d78d54f-aba4-479e-b166-73a91e7388b6" (UID: "7d78d54f-aba4-479e-b166-73a91e7388b6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.159429 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7d78d54f-aba4-479e-b166-73a91e7388b6" (UID: "7d78d54f-aba4-479e-b166-73a91e7388b6"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.159837 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-kube-api-access-ldcm9" (OuterVolumeSpecName: "kube-api-access-ldcm9") pod "7d78d54f-aba4-479e-b166-73a91e7388b6" (UID: "7d78d54f-aba4-479e-b166-73a91e7388b6"). InnerVolumeSpecName "kube-api-access-ldcm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.164288 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="68be7dd14ca951bda2841d63d9c2cf81a5121460ad1eecf2cc07d803d2973c58" exitCode=0
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.164321 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="177712569c8e54981e296a8fc80f573500c49b4e2e7ccf627fca78e1f773aee8" exitCode=0
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.164330 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="d69ba71fd10314c5787c30859c699e56fb1b3353cd489f56afa4c8748442f43a" exitCode=0
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.164338 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="c1111cc844cf46710a0ea506165eb6155bcab772504569b1c245a742a45436ed" exitCode=0
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.164385 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"68be7dd14ca951bda2841d63d9c2cf81a5121460ad1eecf2cc07d803d2973c58"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.164415 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"177712569c8e54981e296a8fc80f573500c49b4e2e7ccf627fca78e1f773aee8"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.164428 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"d69ba71fd10314c5787c30859c699e56fb1b3353cd489f56afa4c8748442f43a"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.164439 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"c1111cc844cf46710a0ea506165eb6155bcab772504569b1c245a742a45436ed"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.170435 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="408caa90be1a5af927636343526e8a819336837225bdd13c863ef2608b9af13c" exitCode=0
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.170462 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="9078d5b74e451da392057b58de47907cc58742c708a8e1e76ce7eb99f277b3a7" exitCode=0
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.170471 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="df4c479583608d3794221071118c08399d0bde969d4b58019106c8359384cca4" exitCode=0
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.170487 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="2c66cdc4afdd5a61714f8fd228aa35ee834437c05a64f32ab8269659e3053417" exitCode=0
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.170523 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"408caa90be1a5af927636343526e8a819336837225bdd13c863ef2608b9af13c"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.170542 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"9078d5b74e451da392057b58de47907cc58742c708a8e1e76ce7eb99f277b3a7"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.170554 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"df4c479583608d3794221071118c08399d0bde969d4b58019106c8359384cca4"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.170565 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"2c66cdc4afdd5a61714f8fd228aa35ee834437c05a64f32ab8269659e3053417"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.171968 4810 generic.go:334] "Generic (PLEG): container finished" podID="7d78d54f-aba4-479e-b166-73a91e7388b6" containerID="a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714" exitCode=0
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.171992 4810 generic.go:334] "Generic (PLEG): container finished" podID="7d78d54f-aba4-479e-b166-73a91e7388b6" containerID="4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7" exitCode=0
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.172027 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" event={"ID":"7d78d54f-aba4-479e-b166-73a91e7388b6","Type":"ContainerDied","Data":"a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.172066 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" event={"ID":"7d78d54f-aba4-479e-b166-73a91e7388b6","Type":"ContainerDied","Data":"4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.172079 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv" event={"ID":"7d78d54f-aba4-479e-b166-73a91e7388b6","Type":"ContainerDied","Data":"453b8be06705eb7f37ce4b106ddb12a73478f623786b1b31eedb3dc21a8c85bf"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.172097 4810 scope.go:117] "RemoveContainer" containerID="a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714"
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.172306 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv"
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.180943 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="83a92983eb8a928938627b9a680c5f1c500d74c6035efc206d01c1d69c9607d3" exitCode=0
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.180977 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="58c88df8ef32e9de52d7ea468066e46db1bb32032ee27f37fb4d7e896f25ef9b" exitCode=0
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.180986 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="46c2fb1d26e030ce2fcaa847f01b470f0f0dcad887a3b14873a99c0d1d15fcc9" exitCode=0
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.181007 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"83a92983eb8a928938627b9a680c5f1c500d74c6035efc206d01c1d69c9607d3"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.181034 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"58c88df8ef32e9de52d7ea468066e46db1bb32032ee27f37fb4d7e896f25ef9b"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.181047 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"46c2fb1d26e030ce2fcaa847f01b470f0f0dcad887a3b14873a99c0d1d15fcc9"}
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.212166 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d78d54f-aba4-479e-b166-73a91e7388b6-config-data" (OuterVolumeSpecName: "config-data") pod "7d78d54f-aba4-479e-b166-73a91e7388b6" (UID: "7d78d54f-aba4-479e-b166-73a91e7388b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.214361 4810 scope.go:117] "RemoveContainer" containerID="4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7"
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.228691 4810 scope.go:117] "RemoveContainer" containerID="a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714"
Jan 10 07:09:47 crc kubenswrapper[4810]: E0110 07:09:47.229122 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714\": container with ID starting with a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714 not found: ID does not exist" containerID="a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714"
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.229159 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714"} err="failed to get container status \"a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714\": rpc error: code = NotFound desc = could not find container \"a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714\": container with ID starting with a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714 not found: ID does not exist"
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.229185 4810 scope.go:117] "RemoveContainer" containerID="4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7"
Jan 10 07:09:47 crc kubenswrapper[4810]: E0110 07:09:47.229468 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7\": container with ID starting with 4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7 not found: ID does not exist" containerID="4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7"
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.229497 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7"} err="failed to get container status \"4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7\": rpc error: code = NotFound desc = could not find container \"4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7\": container with ID starting with 4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7 not found: ID does not exist"
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.229516 4810 scope.go:117] "RemoveContainer" containerID="a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714"
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.229750 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714"} err="failed to get container status \"a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714\": rpc error: code = NotFound desc = could not find container \"a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714\": container with ID starting with a68ca3f0572e73066832d2c8ab5aa0d6189fc442f12969e164f8a410b8316714 not found: ID does not exist"
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.229773 4810 scope.go:117] "RemoveContainer" containerID="4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7"
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.230039 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7"} err="failed to get container status \"4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7\": rpc error: code = NotFound desc = could not find container \"4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7\": container with ID starting with 4b2b272c0fcca5db373bcf151fc6c1ef8a5f6fcfbb03e530268ec69e16d19cd7 not found: ID does not exist"
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.254345 4810 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d78d54f-aba4-479e-b166-73a91e7388b6-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.254402 4810 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7d78d54f-aba4-479e-b166-73a91e7388b6-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.254412 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.254421 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d78d54f-aba4-479e-b166-73a91e7388b6-config-data\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.254431 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldcm9\" (UniqueName: \"kubernetes.io/projected/7d78d54f-aba4-479e-b166-73a91e7388b6-kube-api-access-ldcm9\") on node \"crc\" DevicePath \"\""
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.507153 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv"]
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.511673 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-67f6cc5479-jd8vv"]
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.707754 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5274bd30-f077-4ca0-98a2-d9eb9911f74c" path="/var/lib/kubelet/pods/5274bd30-f077-4ca0-98a2-d9eb9911f74c/volumes"
Jan 10 07:09:47 crc kubenswrapper[4810]: I0110 07:09:47.708942 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d78d54f-aba4-479e-b166-73a91e7388b6" path="/var/lib/kubelet/pods/7d78d54f-aba4-479e-b166-73a91e7388b6/volumes"
Jan 10 07:09:50 crc kubenswrapper[4810]: I0110 07:09:50.882645 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 07:09:50 crc kubenswrapper[4810]: I0110 07:09:50.882982 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.499418 4810 generic.go:334] "Generic (PLEG): container finished" podID="35384be4-c73c-470d-b602-c512b41bd815" containerID="58c20fd7070fac4240a876670259ff2f0a1e879f03e7ea2997db9e3f50a020ec" exitCode=137
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.499499 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"58c20fd7070fac4240a876670259ff2f0a1e879f03e7ea2997db9e3f50a020ec"}
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.506814 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerID="d480b12f49a977ac6afc39cef6e3be30b4602b0a91b4d56b72cf25e064279b5a" exitCode=137
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.506865 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"d480b12f49a977ac6afc39cef6e3be30b4602b0a91b4d56b72cf25e064279b5a"}
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.512100 4810 generic.go:334] "Generic (PLEG): container finished" podID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerID="655261d3f533e4de50c1f16a943470775f19f69516b1d0e9a38f3f14f56950a0" exitCode=137
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.512133 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"655261d3f533e4de50c1f16a943470775f19f69516b1d0e9a38f3f14f56950a0"}
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.734539 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2"
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.741502 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.799262 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-1"
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.907982 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/61a7bea6-d18a-4633-b837-5ef827cd7f93-cache\") pod \"61a7bea6-d18a-4633-b837-5ef827cd7f93\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") "
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.908083 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"35384be4-c73c-470d-b602-c512b41bd815\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") "
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.908107 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"9e445fdb-b8aa-47ba-9e59-febb527bf622\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") "
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.908160 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift\") pod \"9e445fdb-b8aa-47ba-9e59-febb527bf622\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") "
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.908207 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9s99\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-kube-api-access-g9s99\") pod \"35384be4-c73c-470d-b602-c512b41bd815\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") "
Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.908241 4810
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift\") pod \"35384be4-c73c-470d-b602-c512b41bd815\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.908275 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"61a7bea6-d18a-4633-b837-5ef827cd7f93\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.908366 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnvnr\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-kube-api-access-lnvnr\") pod \"9e445fdb-b8aa-47ba-9e59-febb527bf622\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.908387 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/35384be4-c73c-470d-b602-c512b41bd815-lock\") pod \"35384be4-c73c-470d-b602-c512b41bd815\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.908478 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9e445fdb-b8aa-47ba-9e59-febb527bf622-cache\") pod \"9e445fdb-b8aa-47ba-9e59-febb527bf622\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.908495 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9e445fdb-b8aa-47ba-9e59-febb527bf622-lock\") pod \"9e445fdb-b8aa-47ba-9e59-febb527bf622\" (UID: \"9e445fdb-b8aa-47ba-9e59-febb527bf622\") " Jan 10 07:10:16 crc 
kubenswrapper[4810]: I0110 07:10:16.908523 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift\") pod \"61a7bea6-d18a-4633-b837-5ef827cd7f93\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.908553 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/61a7bea6-d18a-4633-b837-5ef827cd7f93-lock\") pod \"61a7bea6-d18a-4633-b837-5ef827cd7f93\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.908588 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/35384be4-c73c-470d-b602-c512b41bd815-cache\") pod \"35384be4-c73c-470d-b602-c512b41bd815\" (UID: \"35384be4-c73c-470d-b602-c512b41bd815\") " Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.908608 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkkv4\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-kube-api-access-dkkv4\") pod \"61a7bea6-d18a-4633-b837-5ef827cd7f93\" (UID: \"61a7bea6-d18a-4633-b837-5ef827cd7f93\") " Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.909081 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a7bea6-d18a-4633-b837-5ef827cd7f93-cache" (OuterVolumeSpecName: "cache") pod "61a7bea6-d18a-4633-b837-5ef827cd7f93" (UID: "61a7bea6-d18a-4633-b837-5ef827cd7f93"). InnerVolumeSpecName "cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.909475 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a7bea6-d18a-4633-b837-5ef827cd7f93-lock" (OuterVolumeSpecName: "lock") pod "61a7bea6-d18a-4633-b837-5ef827cd7f93" (UID: "61a7bea6-d18a-4633-b837-5ef827cd7f93"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.909546 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35384be4-c73c-470d-b602-c512b41bd815-cache" (OuterVolumeSpecName: "cache") pod "35384be4-c73c-470d-b602-c512b41bd815" (UID: "35384be4-c73c-470d-b602-c512b41bd815"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.909663 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e445fdb-b8aa-47ba-9e59-febb527bf622-lock" (OuterVolumeSpecName: "lock") pod "9e445fdb-b8aa-47ba-9e59-febb527bf622" (UID: "9e445fdb-b8aa-47ba-9e59-febb527bf622"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.909847 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35384be4-c73c-470d-b602-c512b41bd815-lock" (OuterVolumeSpecName: "lock") pod "35384be4-c73c-470d-b602-c512b41bd815" (UID: "35384be4-c73c-470d-b602-c512b41bd815"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.909961 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e445fdb-b8aa-47ba-9e59-febb527bf622-cache" (OuterVolumeSpecName: "cache") pod "9e445fdb-b8aa-47ba-9e59-febb527bf622" (UID: "9e445fdb-b8aa-47ba-9e59-febb527bf622"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.914165 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "swift") pod "35384be4-c73c-470d-b602-c512b41bd815" (UID: "35384be4-c73c-470d-b602-c512b41bd815"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.914316 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9e445fdb-b8aa-47ba-9e59-febb527bf622" (UID: "9e445fdb-b8aa-47ba-9e59-febb527bf622"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.914462 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-kube-api-access-lnvnr" (OuterVolumeSpecName: "kube-api-access-lnvnr") pod "9e445fdb-b8aa-47ba-9e59-febb527bf622" (UID: "9e445fdb-b8aa-47ba-9e59-febb527bf622"). InnerVolumeSpecName "kube-api-access-lnvnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.914831 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-kube-api-access-g9s99" (OuterVolumeSpecName: "kube-api-access-g9s99") pod "35384be4-c73c-470d-b602-c512b41bd815" (UID: "35384be4-c73c-470d-b602-c512b41bd815"). InnerVolumeSpecName "kube-api-access-g9s99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.915317 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "35384be4-c73c-470d-b602-c512b41bd815" (UID: "35384be4-c73c-470d-b602-c512b41bd815"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.915548 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "swift") pod "61a7bea6-d18a-4633-b837-5ef827cd7f93" (UID: "61a7bea6-d18a-4633-b837-5ef827cd7f93"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.916109 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "61a7bea6-d18a-4633-b837-5ef827cd7f93" (UID: "61a7bea6-d18a-4633-b837-5ef827cd7f93"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.916366 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-kube-api-access-dkkv4" (OuterVolumeSpecName: "kube-api-access-dkkv4") pod "61a7bea6-d18a-4633-b837-5ef827cd7f93" (UID: "61a7bea6-d18a-4633-b837-5ef827cd7f93"). InnerVolumeSpecName "kube-api-access-dkkv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:10:16 crc kubenswrapper[4810]: I0110 07:10:16.919329 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "swift") pod "9e445fdb-b8aa-47ba-9e59-febb527bf622" (UID: "9e445fdb-b8aa-47ba-9e59-febb527bf622"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010767 4810 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9e445fdb-b8aa-47ba-9e59-febb527bf622-cache\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010802 4810 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9e445fdb-b8aa-47ba-9e59-febb527bf622-lock\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010812 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010823 4810 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/61a7bea6-d18a-4633-b837-5ef827cd7f93-lock\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010831 
4810 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/35384be4-c73c-470d-b602-c512b41bd815-cache\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010841 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkkv4\" (UniqueName: \"kubernetes.io/projected/61a7bea6-d18a-4633-b837-5ef827cd7f93-kube-api-access-dkkv4\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010851 4810 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/61a7bea6-d18a-4633-b837-5ef827cd7f93-cache\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010887 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010904 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010913 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010923 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9s99\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-kube-api-access-g9s99\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010933 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/35384be4-c73c-470d-b602-c512b41bd815-etc-swift\") on node 
\"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010960 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010975 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnvnr\" (UniqueName: \"kubernetes.io/projected/9e445fdb-b8aa-47ba-9e59-febb527bf622-kube-api-access-lnvnr\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.010987 4810 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/35384be4-c73c-470d-b602-c512b41bd815-lock\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.027778 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.028101 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.028484 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.112066 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.112109 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 10 
07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.112120 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.530046 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"35384be4-c73c-470d-b602-c512b41bd815","Type":"ContainerDied","Data":"6def7094e0e85095f5bddb3a27003ee81919a5973bb107b2393149277d188daa"} Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.530126 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.530785 4810 scope.go:117] "RemoveContainer" containerID="58c20fd7070fac4240a876670259ff2f0a1e879f03e7ea2997db9e3f50a020ec" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.540969 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-2" event={"ID":"9e445fdb-b8aa-47ba-9e59-febb527bf622","Type":"ContainerDied","Data":"8100347cec1272dd3604005cb10798d290c7d1bb3eca3e0e33462080e6f2a923"} Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.544496 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-2" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.550783 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-1" event={"ID":"61a7bea6-d18a-4633-b837-5ef827cd7f93","Type":"ContainerDied","Data":"c52dccb808b6c0603000db78a7cc13b79e8b9ad8a12dd1fe8f81ba24d0938195"} Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.550947 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-1" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.570000 4810 scope.go:117] "RemoveContainer" containerID="68be7dd14ca951bda2841d63d9c2cf81a5121460ad1eecf2cc07d803d2973c58" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.579002 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.588932 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.612671 4810 scope.go:117] "RemoveContainer" containerID="2dd5918bcf687a07d9affd5c725339df23b478f00a41846011ae755f155f839f" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.616117 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.632971 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-2"] Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.637358 4810 scope.go:117] "RemoveContainer" containerID="89d190dd57a30b5bc0c517fd2472a5b6132bf00727d87123b266ce6b5a89d12f" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.640891 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.647696 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-1"] Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.652422 4810 scope.go:117] "RemoveContainer" containerID="59b21531ecf41d1a832ced35c76fa6db9f76208c6e3d229c62ad0c507bc75c98" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.670925 4810 scope.go:117] "RemoveContainer" containerID="b6c287d40f43d63f67a964a51f7fc2e29f639796d4a3d3110277e4ad469c1c18" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.686184 4810 scope.go:117] 
"RemoveContainer" containerID="177712569c8e54981e296a8fc80f573500c49b4e2e7ccf627fca78e1f773aee8" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.700780 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35384be4-c73c-470d-b602-c512b41bd815" path="/var/lib/kubelet/pods/35384be4-c73c-470d-b602-c512b41bd815/volumes" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.702769 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" path="/var/lib/kubelet/pods/61a7bea6-d18a-4633-b837-5ef827cd7f93/volumes" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.704484 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" path="/var/lib/kubelet/pods/9e445fdb-b8aa-47ba-9e59-febb527bf622/volumes" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.704700 4810 scope.go:117] "RemoveContainer" containerID="fe930c121fe712a087a30b0efb01808b19ce1333560bec981c45b3bf124821e0" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.723597 4810 scope.go:117] "RemoveContainer" containerID="42b52b1f8e2388cb2ad202707e9c694223b0076895bb2e97df9176fbde8e9668" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.742598 4810 scope.go:117] "RemoveContainer" containerID="42b3307435cff9c1f27540d516def8fd0fbee9afd64ea83f8e3a8ff2cab5df46" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.757774 4810 scope.go:117] "RemoveContainer" containerID="d69ba71fd10314c5787c30859c699e56fb1b3353cd489f56afa4c8748442f43a" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.780731 4810 scope.go:117] "RemoveContainer" containerID="574db38430e62e4aab8be062c769da593c603c0e95d9fc45c3a9dd23efab3a37" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.798342 4810 scope.go:117] "RemoveContainer" containerID="9cab011a8fd4542f33632c7a9896800e7d8d686813617d394f6c871821f3606d" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.813318 4810 
scope.go:117] "RemoveContainer" containerID="5ba722e037cb33f2303c9f6a818e040929134e01e825bb2252f52bcfb087ef9e" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.829283 4810 scope.go:117] "RemoveContainer" containerID="c1111cc844cf46710a0ea506165eb6155bcab772504569b1c245a742a45436ed" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.854371 4810 scope.go:117] "RemoveContainer" containerID="d480b12f49a977ac6afc39cef6e3be30b4602b0a91b4d56b72cf25e064279b5a" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.875794 4810 scope.go:117] "RemoveContainer" containerID="408caa90be1a5af927636343526e8a819336837225bdd13c863ef2608b9af13c" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.891157 4810 scope.go:117] "RemoveContainer" containerID="a870c8642b348c6216f052eb0d1f2a4a66fa9b94c5223d449cbfaf7af7284bdb" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.913390 4810 scope.go:117] "RemoveContainer" containerID="a831b4dbfe9d65625e504e2bd3674ba9902a7aca2a65a9abb397fc0e4bba632b" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.931189 4810 scope.go:117] "RemoveContainer" containerID="16cdd481355f9f1be103a71f3edf4df6da41b172997ae9fc8eea38fa779d1244" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.948723 4810 scope.go:117] "RemoveContainer" containerID="f6e45c8ddea0e157a1fef4618cded3737bdca9eb26c9af4e9d8083081469d20e" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.972733 4810 scope.go:117] "RemoveContainer" containerID="9078d5b74e451da392057b58de47907cc58742c708a8e1e76ce7eb99f277b3a7" Jan 10 07:10:17 crc kubenswrapper[4810]: I0110 07:10:17.992933 4810 scope.go:117] "RemoveContainer" containerID="320b26dc3d937e8435175c954afaac3c639220fb5d43294acd63f95f85f8fa6a" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.011328 4810 scope.go:117] "RemoveContainer" containerID="1da9de7dbe13392b022f17fbb43eb6c586514ce892ee55e0b435a87430e6c66c" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.031127 4810 scope.go:117] 
"RemoveContainer" containerID="7052b79c7eae71345ad32ba5b42631d665cb221acf4b4ce24dd0f0b6a78f5327" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.050766 4810 scope.go:117] "RemoveContainer" containerID="df4c479583608d3794221071118c08399d0bde969d4b58019106c8359384cca4" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.082753 4810 scope.go:117] "RemoveContainer" containerID="9db8ebaaaf644355d90527d55089b19b151e881cc106c970e3ee487545b49d8c" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.106551 4810 scope.go:117] "RemoveContainer" containerID="86d23f9c56770a8205ea910f34a2ca1faca100f56c90beba359935e03de5b021" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.123349 4810 scope.go:117] "RemoveContainer" containerID="84afb59822e91019a7cfddbd9dbd110a055508aa456c4e1cdd0db42594944483" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.145225 4810 scope.go:117] "RemoveContainer" containerID="2c66cdc4afdd5a61714f8fd228aa35ee834437c05a64f32ab8269659e3053417" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.159148 4810 scope.go:117] "RemoveContainer" containerID="655261d3f533e4de50c1f16a943470775f19f69516b1d0e9a38f3f14f56950a0" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.176157 4810 scope.go:117] "RemoveContainer" containerID="83a92983eb8a928938627b9a680c5f1c500d74c6035efc206d01c1d69c9607d3" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.190358 4810 scope.go:117] "RemoveContainer" containerID="f7125dc757f268fc55fbaebe9eb9c73691030da036764248d74fed53467dd548" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.205992 4810 scope.go:117] "RemoveContainer" containerID="c18c18bfba43ae0ada590014a21ca1650869e9bfa5ca9f6bf8445cc2708fbf5e" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.222772 4810 scope.go:117] "RemoveContainer" containerID="86f9b6878fa02ee168b75c2756f8beadfcb6e1db40ce87d0ae728d68b22e51b8" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.242281 4810 scope.go:117] "RemoveContainer" 
containerID="8d507a636254969000b1a7d908e3078b665671a54ca2bbf1dcc69a0ff5fe30d3" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.259262 4810 scope.go:117] "RemoveContainer" containerID="58c88df8ef32e9de52d7ea468066e46db1bb32032ee27f37fb4d7e896f25ef9b" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.277353 4810 scope.go:117] "RemoveContainer" containerID="611e688e5612b670838a8251c8236d9cb03d1f9926e1a1eda2057acc7351e116" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.298541 4810 scope.go:117] "RemoveContainer" containerID="b5002e16aea7e39d32f61224d814375b9e9526efc9e4ade0a3ee92dedccd44e9" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.315886 4810 scope.go:117] "RemoveContainer" containerID="8e1648f94f1d40379cbd3a2dad450cb4b4e0bdea3952d813b6d45addeb42a28c" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.334718 4810 scope.go:117] "RemoveContainer" containerID="eace1af34715086870b5562438d03905f272a5cf5775cbcfb0d1ce9ae7908ed9" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.355763 4810 scope.go:117] "RemoveContainer" containerID="7974e13c559cc2b816f58026c7024f52424757b650c7c2e9ce4bc885fc426988" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.381579 4810 scope.go:117] "RemoveContainer" containerID="d0908efd399824837fa74784a6c8be146f30f6787df69e0e2eacd645349b4c3f" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.401678 4810 scope.go:117] "RemoveContainer" containerID="69f5e1b55ddd8a51cdb10d86003a82dc34ddba8a939304e310aed63bd32a5a88" Jan 10 07:10:18 crc kubenswrapper[4810]: I0110 07:10:18.427277 4810 scope.go:117] "RemoveContainer" containerID="46c2fb1d26e030ce2fcaa847f01b470f0f0dcad887a3b14873a99c0d1d15fcc9" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.446808 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447399 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-reaper" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447414 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-reaper" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447429 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447437 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447446 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447453 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447462 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447470 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447478 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="rsync" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447486 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="rsync" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447493 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="swift-recon-cron" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447501 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="swift-recon-cron" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447514 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447522 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-server" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447535 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447542 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447555 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447562 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447577 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447584 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447593 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447600 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447611 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447617 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-server" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447628 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447635 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447651 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447658 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447671 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-expirer" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447678 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-expirer" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447691 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447697 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447710 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-expirer" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447718 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-expirer" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447740 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-reaper" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447746 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-reaper" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447757 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-reaper" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447764 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-reaper" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447773 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447781 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447790 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447797 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447804 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447811 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447824 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d78d54f-aba4-479e-b166-73a91e7388b6" containerName="proxy-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447832 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d78d54f-aba4-479e-b166-73a91e7388b6" containerName="proxy-server" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447841 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447848 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447860 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a" containerName="swift-ring-rebalance" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447867 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a" containerName="swift-ring-rebalance" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447879 4810 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447886 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447898 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="rsync" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447905 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="rsync" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447914 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447921 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447933 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447941 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-server" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447953 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447960 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-server" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447970 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-expirer" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447978 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-expirer" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.447987 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d78d54f-aba4-479e-b166-73a91e7388b6" containerName="proxy-httpd" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.447994 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d78d54f-aba4-479e-b166-73a91e7388b6" containerName="proxy-httpd" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448003 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448010 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448022 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="swift-recon-cron" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448029 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="swift-recon-cron" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448043 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448050 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-server" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448061 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448069 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448076 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="rsync" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448084 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="rsync" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448096 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="swift-recon-cron" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448106 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="swift-recon-cron" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448119 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448126 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448138 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448146 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-server" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448157 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448165 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-server" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448178 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448210 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448224 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448233 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448242 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448249 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-server" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448260 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448268 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-server" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448277 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448284 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448294 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448301 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.448312 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448319 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448469 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="rsync" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448485 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448498 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-reaper" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448511 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448524 4810 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448534 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="rsync" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448546 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448558 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448568 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448576 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d78d54f-aba4-479e-b166-73a91e7388b6" containerName="proxy-httpd" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448587 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448598 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448605 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="swift-recon-cron" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448613 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="swift-recon-cron" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448622 4810 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-expirer" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448634 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448645 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448657 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448666 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d78d54f-aba4-479e-b166-73a91e7388b6" containerName="proxy-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448675 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448684 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448694 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-expirer" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448707 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448738 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448746 
4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448756 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="swift-recon-cron" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448767 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448776 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448783 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="container-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448793 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448802 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448812 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-expirer" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448824 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448833 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf90b3c1-32cd-45c3-9ca0-bc85c9b43b1a" containerName="swift-ring-rebalance" Jan 10 07:10:20 
crc kubenswrapper[4810]: I0110 07:10:20.448843 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="account-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448853 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="account-reaper" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448864 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448875 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="object-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448888 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448897 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="container-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448908 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e445fdb-b8aa-47ba-9e59-febb527bf622" containerName="object-server" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448915 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="rsync" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448925 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448936 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" 
containerName="object-updater" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448944 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-replicator" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448956 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7bea6-d18a-4633-b837-5ef827cd7f93" containerName="account-reaper" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448965 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="container-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.448977 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="35384be4-c73c-470d-b602-c512b41bd815" containerName="object-auditor" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.453565 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.455176 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.455282 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.455382 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.455785 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-77vm9" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.476442 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.561923 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-cache\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.561997 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfswb\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-kube-api-access-nfswb\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.562030 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.562088 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.562111 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-lock\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.663458 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfswb\" (UniqueName: 
\"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-kube-api-access-nfswb\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.663508 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.663560 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.663579 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-lock\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.663734 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.663764 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:10:20 crc kubenswrapper[4810]: E0110 07:10:20.664033 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift podName:1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1 nodeName:}" failed. No retries permitted until 2026-01-10 07:10:21.163800078 +0000 UTC m=+1449.779292961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift") pod "swift-storage-0" (UID: "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1") : configmap "swift-ring-files" not found
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.664167 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") device mount path \"/mnt/openstack/pv08\"" pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.663636 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-cache\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.664463 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-lock\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.664530 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-cache\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.683845 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.685755 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfswb\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-kube-api-access-nfswb\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.818915 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-q6t5s"]
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.819748 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.823018 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.823794 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.823850 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.829250 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-q6t5s"]
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.882561 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.882632 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.968235 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-etc-swift\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.968305 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-ring-data-devices\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.968331 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-dispersionconf\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.968361 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-scripts\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.968390 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-swiftconf\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:20 crc kubenswrapper[4810]: I0110 07:10:20.968464 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdrs7\" (UniqueName: \"kubernetes.io/projected/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-kube-api-access-fdrs7\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.069171 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-swiftconf\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.069293 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdrs7\" (UniqueName: \"kubernetes.io/projected/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-kube-api-access-fdrs7\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.069320 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-etc-swift\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.069358 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-ring-data-devices\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.069379 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-dispersionconf\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.069408 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-scripts\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.069846 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-etc-swift\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.070102 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-ring-data-devices\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.070497 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-scripts\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.074013 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-dispersionconf\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.080363 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-swiftconf\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.091808 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdrs7\" (UniqueName: \"kubernetes.io/projected/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-kube-api-access-fdrs7\") pod \"swift-ring-rebalance-q6t5s\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.135393 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.171214 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:21 crc kubenswrapper[4810]: E0110 07:10:21.171470 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:10:21 crc kubenswrapper[4810]: E0110 07:10:21.171503 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:10:21 crc kubenswrapper[4810]: E0110 07:10:21.171564 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift podName:1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1 nodeName:}" failed. No retries permitted until 2026-01-10 07:10:22.171546495 +0000 UTC m=+1450.787039378 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift") pod "swift-storage-0" (UID: "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1") : configmap "swift-ring-files" not found
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.244021 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"]
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.245275 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.260003 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"]
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.374475 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54f138ae-b546-44c2-aca7-b1f9efabf15f-run-httpd\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.374714 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f138ae-b546-44c2-aca7-b1f9efabf15f-config-data\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.374928 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmsmb\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-kube-api-access-bmsmb\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.375033 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54f138ae-b546-44c2-aca7-b1f9efabf15f-log-httpd\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.375081 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.476495 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmsmb\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-kube-api-access-bmsmb\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.476584 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54f138ae-b546-44c2-aca7-b1f9efabf15f-log-httpd\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.476634 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.476663 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54f138ae-b546-44c2-aca7-b1f9efabf15f-run-httpd\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.476722 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f138ae-b546-44c2-aca7-b1f9efabf15f-config-data\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: E0110 07:10:21.478414 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:10:21 crc kubenswrapper[4810]: E0110 07:10:21.478436 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-6bb4649ff-699nl: configmap "swift-ring-files" not found
Jan 10 07:10:21 crc kubenswrapper[4810]: E0110 07:10:21.478488 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift podName:54f138ae-b546-44c2-aca7-b1f9efabf15f nodeName:}" failed. No retries permitted until 2026-01-10 07:10:21.978471191 +0000 UTC m=+1450.593964084 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift") pod "swift-proxy-6bb4649ff-699nl" (UID: "54f138ae-b546-44c2-aca7-b1f9efabf15f") : configmap "swift-ring-files" not found
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.478813 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54f138ae-b546-44c2-aca7-b1f9efabf15f-run-httpd\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.478842 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54f138ae-b546-44c2-aca7-b1f9efabf15f-log-httpd\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.485373 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f138ae-b546-44c2-aca7-b1f9efabf15f-config-data\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.500691 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmsmb\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-kube-api-access-bmsmb\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.578056 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-q6t5s"]
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.594861 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s" event={"ID":"b1d19d63-87c8-4b90-9c8f-19de9a4c024d","Type":"ContainerStarted","Data":"e3160f6476817d712a7c4178a65ccde4d49babe74b528137192e9dde0c376e00"}
Jan 10 07:10:21 crc kubenswrapper[4810]: I0110 07:10:21.985316 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:21 crc kubenswrapper[4810]: E0110 07:10:21.985436 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:10:21 crc kubenswrapper[4810]: E0110 07:10:21.996506 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-6bb4649ff-699nl: configmap "swift-ring-files" not found
Jan 10 07:10:21 crc kubenswrapper[4810]: E0110 07:10:21.996595 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift podName:54f138ae-b546-44c2-aca7-b1f9efabf15f nodeName:}" failed. No retries permitted until 2026-01-10 07:10:22.996570944 +0000 UTC m=+1451.612063847 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift") pod "swift-proxy-6bb4649ff-699nl" (UID: "54f138ae-b546-44c2-aca7-b1f9efabf15f") : configmap "swift-ring-files" not found
Jan 10 07:10:22 crc kubenswrapper[4810]: I0110 07:10:22.199387 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:22 crc kubenswrapper[4810]: E0110 07:10:22.199608 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:10:22 crc kubenswrapper[4810]: E0110 07:10:22.199621 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:10:22 crc kubenswrapper[4810]: E0110 07:10:22.199665 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift podName:1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1 nodeName:}" failed. No retries permitted until 2026-01-10 07:10:24.199648579 +0000 UTC m=+1452.815141462 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift") pod "swift-storage-0" (UID: "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1") : configmap "swift-ring-files" not found
Jan 10 07:10:22 crc kubenswrapper[4810]: I0110 07:10:22.601855 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s" event={"ID":"b1d19d63-87c8-4b90-9c8f-19de9a4c024d","Type":"ContainerStarted","Data":"c1c305b4cf7c2818b08e61f5f2903f59a29c52d26ff91b35319410d712474fad"}
Jan 10 07:10:22 crc kubenswrapper[4810]: I0110 07:10:22.620804 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s" podStartSLOduration=2.620789765 podStartE2EDuration="2.620789765s" podCreationTimestamp="2026-01-10 07:10:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:10:22.618901629 +0000 UTC m=+1451.234394522" watchObservedRunningTime="2026-01-10 07:10:22.620789765 +0000 UTC m=+1451.236282648"
Jan 10 07:10:23 crc kubenswrapper[4810]: I0110 07:10:23.009089 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:23 crc kubenswrapper[4810]: E0110 07:10:23.009784 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:10:23 crc kubenswrapper[4810]: E0110 07:10:23.009816 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-6bb4649ff-699nl: configmap "swift-ring-files" not found
Jan 10 07:10:23 crc kubenswrapper[4810]: E0110 07:10:23.009885 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift podName:54f138ae-b546-44c2-aca7-b1f9efabf15f nodeName:}" failed. No retries permitted until 2026-01-10 07:10:25.009859875 +0000 UTC m=+1453.625352798 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift") pod "swift-proxy-6bb4649ff-699nl" (UID: "54f138ae-b546-44c2-aca7-b1f9efabf15f") : configmap "swift-ring-files" not found
Jan 10 07:10:24 crc kubenswrapper[4810]: I0110 07:10:24.230490 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:24 crc kubenswrapper[4810]: E0110 07:10:24.230718 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:10:24 crc kubenswrapper[4810]: E0110 07:10:24.230732 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:10:24 crc kubenswrapper[4810]: E0110 07:10:24.230772 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift podName:1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1 nodeName:}" failed. No retries permitted until 2026-01-10 07:10:28.230758997 +0000 UTC m=+1456.846251880 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift") pod "swift-storage-0" (UID: "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1") : configmap "swift-ring-files" not found
Jan 10 07:10:25 crc kubenswrapper[4810]: I0110 07:10:25.051259 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:25 crc kubenswrapper[4810]: E0110 07:10:25.051508 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:10:25 crc kubenswrapper[4810]: E0110 07:10:25.051569 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-6bb4649ff-699nl: configmap "swift-ring-files" not found
Jan 10 07:10:25 crc kubenswrapper[4810]: E0110 07:10:25.051691 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift podName:54f138ae-b546-44c2-aca7-b1f9efabf15f nodeName:}" failed. No retries permitted until 2026-01-10 07:10:29.051655807 +0000 UTC m=+1457.667148720 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift") pod "swift-proxy-6bb4649ff-699nl" (UID: "54f138ae-b546-44c2-aca7-b1f9efabf15f") : configmap "swift-ring-files" not found
Jan 10 07:10:28 crc kubenswrapper[4810]: I0110 07:10:28.301001 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0"
Jan 10 07:10:28 crc kubenswrapper[4810]: E0110 07:10:28.301269 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:10:28 crc kubenswrapper[4810]: E0110 07:10:28.301525 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found
Jan 10 07:10:28 crc kubenswrapper[4810]: E0110 07:10:28.301641 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift podName:1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1 nodeName:}" failed. No retries permitted until 2026-01-10 07:10:36.301607428 +0000 UTC m=+1464.917100351 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift") pod "swift-storage-0" (UID: "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1") : configmap "swift-ring-files" not found
Jan 10 07:10:29 crc kubenswrapper[4810]: I0110 07:10:29.111688 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"
Jan 10 07:10:29 crc kubenswrapper[4810]: E0110 07:10:29.111871 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found
Jan 10 07:10:29 crc kubenswrapper[4810]: E0110 07:10:29.112484 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-6bb4649ff-699nl: configmap "swift-ring-files" not found
Jan 10 07:10:29 crc kubenswrapper[4810]: E0110 07:10:29.112568 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift podName:54f138ae-b546-44c2-aca7-b1f9efabf15f nodeName:}" failed. No retries permitted until 2026-01-10 07:10:37.112544211 +0000 UTC m=+1465.728037124 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift") pod "swift-proxy-6bb4649ff-699nl" (UID: "54f138ae-b546-44c2-aca7-b1f9efabf15f") : configmap "swift-ring-files" not found
Jan 10 07:10:32 crc kubenswrapper[4810]: I0110 07:10:32.700407 4810 generic.go:334] "Generic (PLEG): container finished" podID="b1d19d63-87c8-4b90-9c8f-19de9a4c024d" containerID="c1c305b4cf7c2818b08e61f5f2903f59a29c52d26ff91b35319410d712474fad" exitCode=0
Jan 10 07:10:32 crc kubenswrapper[4810]: I0110 07:10:32.700475 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s" event={"ID":"b1d19d63-87c8-4b90-9c8f-19de9a4c024d","Type":"ContainerDied","Data":"c1c305b4cf7c2818b08e61f5f2903f59a29c52d26ff91b35319410d712474fad"}
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.016506 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s"
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.188097 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-swiftconf\") pod \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") "
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.188420 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-dispersionconf\") pod \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") "
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.188469 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-etc-swift\") pod \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") "
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.188587 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-scripts\") pod \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") "
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.188624 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdrs7\" (UniqueName: \"kubernetes.io/projected/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-kube-api-access-fdrs7\") pod \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") "
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.188657 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-ring-data-devices\") pod \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") "
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.189476 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b1d19d63-87c8-4b90-9c8f-19de9a4c024d" (UID: "b1d19d63-87c8-4b90-9c8f-19de9a4c024d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.189858 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b1d19d63-87c8-4b90-9c8f-19de9a4c024d" (UID: "b1d19d63-87c8-4b90-9c8f-19de9a4c024d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.194168 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-kube-api-access-fdrs7" (OuterVolumeSpecName: "kube-api-access-fdrs7") pod "b1d19d63-87c8-4b90-9c8f-19de9a4c024d" (UID: "b1d19d63-87c8-4b90-9c8f-19de9a4c024d"). InnerVolumeSpecName "kube-api-access-fdrs7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.221726 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-scripts" (OuterVolumeSpecName: "scripts") pod "b1d19d63-87c8-4b90-9c8f-19de9a4c024d" (UID: "b1d19d63-87c8-4b90-9c8f-19de9a4c024d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:10:34 crc kubenswrapper[4810]: E0110 07:10:34.221897 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-swiftconf podName:b1d19d63-87c8-4b90-9c8f-19de9a4c024d nodeName:}" failed. No retries permitted until 2026-01-10 07:10:34.721866345 +0000 UTC m=+1463.337359248 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "swiftconf" (UniqueName: "kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-swiftconf") pod "b1d19d63-87c8-4b90-9c8f-19de9a4c024d" (UID: "b1d19d63-87c8-4b90-9c8f-19de9a4c024d") : error deleting /var/lib/kubelet/pods/b1d19d63-87c8-4b90-9c8f-19de9a4c024d/volume-subpaths: remove /var/lib/kubelet/pods/b1d19d63-87c8-4b90-9c8f-19de9a4c024d/volume-subpaths: no such file or directory
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.225839 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b1d19d63-87c8-4b90-9c8f-19de9a4c024d" (UID: "b1d19d63-87c8-4b90-9c8f-19de9a4c024d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.291468 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.291501 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.291514 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.291524 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-scripts\") on node \"crc\" DevicePath \"\""
Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.291533 4810
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdrs7\" (UniqueName: \"kubernetes.io/projected/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-kube-api-access-fdrs7\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.723953 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s" event={"ID":"b1d19d63-87c8-4b90-9c8f-19de9a4c024d","Type":"ContainerDied","Data":"e3160f6476817d712a7c4178a65ccde4d49babe74b528137192e9dde0c376e00"} Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.724006 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3160f6476817d712a7c4178a65ccde4d49babe74b528137192e9dde0c376e00" Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.724091 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-q6t5s" Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.798463 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-swiftconf\") pod \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\" (UID: \"b1d19d63-87c8-4b90-9c8f-19de9a4c024d\") " Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.801113 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b1d19d63-87c8-4b90-9c8f-19de9a4c024d" (UID: "b1d19d63-87c8-4b90-9c8f-19de9a4c024d"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:10:34 crc kubenswrapper[4810]: I0110 07:10:34.899967 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b1d19d63-87c8-4b90-9c8f-19de9a4c024d-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:36 crc kubenswrapper[4810]: I0110 07:10:36.323138 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:10:36 crc kubenswrapper[4810]: I0110 07:10:36.334013 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift\") pod \"swift-storage-0\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:10:36 crc kubenswrapper[4810]: I0110 07:10:36.371525 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:10:36 crc kubenswrapper[4810]: I0110 07:10:36.890705 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:10:37 crc kubenswrapper[4810]: I0110 07:10:37.140173 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" Jan 10 07:10:37 crc kubenswrapper[4810]: I0110 07:10:37.147573 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift\") pod \"swift-proxy-6bb4649ff-699nl\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" Jan 10 07:10:37 crc kubenswrapper[4810]: I0110 07:10:37.174159 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" Jan 10 07:10:37 crc kubenswrapper[4810]: I0110 07:10:37.645131 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"] Jan 10 07:10:37 crc kubenswrapper[4810]: W0110 07:10:37.655470 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54f138ae_b546_44c2_aca7_b1f9efabf15f.slice/crio-4e39d5c0e1917d9d888d51c3e518d55a20e027a765fa5b7df7654f92cf825485 WatchSource:0}: Error finding container 4e39d5c0e1917d9d888d51c3e518d55a20e027a765fa5b7df7654f92cf825485: Status 404 returned error can't find the container with id 4e39d5c0e1917d9d888d51c3e518d55a20e027a765fa5b7df7654f92cf825485 Jan 10 07:10:37 crc kubenswrapper[4810]: I0110 07:10:37.754949 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" event={"ID":"54f138ae-b546-44c2-aca7-b1f9efabf15f","Type":"ContainerStarted","Data":"4e39d5c0e1917d9d888d51c3e518d55a20e027a765fa5b7df7654f92cf825485"} Jan 10 07:10:37 crc kubenswrapper[4810]: I0110 07:10:37.758273 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"db11dec07421fe11e9e4e60d260e06d925f1267e038faf5054b627f787c62679"} Jan 10 07:10:37 crc kubenswrapper[4810]: I0110 07:10:37.758335 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"61323b3cefbf9caa503a660138d57ab7e24603773d9bc2d3df8913114b7333e6"} Jan 10 07:10:38 crc kubenswrapper[4810]: I0110 07:10:38.768124 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" 
event={"ID":"54f138ae-b546-44c2-aca7-b1f9efabf15f","Type":"ContainerStarted","Data":"6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c"} Jan 10 07:10:38 crc kubenswrapper[4810]: I0110 07:10:38.768470 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" Jan 10 07:10:38 crc kubenswrapper[4810]: I0110 07:10:38.768484 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" Jan 10 07:10:38 crc kubenswrapper[4810]: I0110 07:10:38.768493 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" event={"ID":"54f138ae-b546-44c2-aca7-b1f9efabf15f","Type":"ContainerStarted","Data":"5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b"} Jan 10 07:10:38 crc kubenswrapper[4810]: I0110 07:10:38.772247 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"f466112f5cf5c677998140264a00bdc4942d3665dc6890fbb4d02a14e09d0c8c"} Jan 10 07:10:38 crc kubenswrapper[4810]: I0110 07:10:38.772284 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"4e53625d93776d7465cf9078276b05e0d5f2a21d29f7ea53f7a482b7f6360701"} Jan 10 07:10:38 crc kubenswrapper[4810]: I0110 07:10:38.787960 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" podStartSLOduration=17.787942814 podStartE2EDuration="17.787942814s" podCreationTimestamp="2026-01-10 07:10:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:10:38.78694049 +0000 UTC m=+1467.402433373" watchObservedRunningTime="2026-01-10 
07:10:38.787942814 +0000 UTC m=+1467.403435697" Jan 10 07:10:39 crc kubenswrapper[4810]: I0110 07:10:39.787518 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"ac4439272dd5f448a3c541797c60d7bb154923babfcfe81ab6729ba4f44d4d28"} Jan 10 07:10:39 crc kubenswrapper[4810]: I0110 07:10:39.787861 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"5ff384d32c8d75a73ee6531a498166a9a92a041873b8c7ad5e05a895c22e3169"} Jan 10 07:10:39 crc kubenswrapper[4810]: I0110 07:10:39.787877 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"5eb0615f1e25d8ed10333372554ae24ceed057df26415da984fe73b356536f32"} Jan 10 07:10:39 crc kubenswrapper[4810]: I0110 07:10:39.787889 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"e771edc6b563d0733ff425f35a73b560fc0d0c6a71e67c7bda204451260b072d"} Jan 10 07:10:39 crc kubenswrapper[4810]: I0110 07:10:39.787900 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"dd20aaafc6222606d615195d877cf3ecc39e882d13d02321b14e25af5503caa3"} Jan 10 07:10:39 crc kubenswrapper[4810]: I0110 07:10:39.787909 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"5e7c68436cd97b1f593ab9264c278df29b8e80ac51b4e098d14dd124ee1b2c1e"} Jan 10 07:10:40 crc kubenswrapper[4810]: I0110 07:10:40.800688 4810 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"7b39620d558ae9b8c90805f48bde7b412079fc41303ca92bff92671e43b5b217"} Jan 10 07:10:40 crc kubenswrapper[4810]: I0110 07:10:40.800733 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"4d6c05815a4ccce7125f02e62bb6eb22e5d591aea64da79c5c940a36ac7a7419"} Jan 10 07:10:40 crc kubenswrapper[4810]: I0110 07:10:40.800744 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"561a640d039971120012fc32fae2ad63b15e794870a2bfb98e1bc57d31bd6892"} Jan 10 07:10:40 crc kubenswrapper[4810]: I0110 07:10:40.800752 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"a471594a24f6c04ffd45f65413594d956b801ba81c51eebee2804e7472c524b1"} Jan 10 07:10:40 crc kubenswrapper[4810]: I0110 07:10:40.800760 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"334271258c82dd525dc61b240cd122a3b14ca81ce7673cebb9aa2605fb2aa242"} Jan 10 07:10:41 crc kubenswrapper[4810]: I0110 07:10:41.814353 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerStarted","Data":"120e8dc921efeede5b8a0b18b58165dc52818c501e276126505583cb530ee8f4"} Jan 10 07:10:42 crc kubenswrapper[4810]: I0110 07:10:42.863794 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=23.863767996 
podStartE2EDuration="23.863767996s" podCreationTimestamp="2026-01-10 07:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:10:42.854439643 +0000 UTC m=+1471.469932526" watchObservedRunningTime="2026-01-10 07:10:42.863767996 +0000 UTC m=+1471.479260889" Jan 10 07:10:47 crc kubenswrapper[4810]: I0110 07:10:47.176898 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" Jan 10 07:10:47 crc kubenswrapper[4810]: I0110 07:10:47.178504 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.939245 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-q6t5s"] Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.951821 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-q6t5s"] Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.962348 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.962930 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-server" containerID="cri-o://db11dec07421fe11e9e4e60d260e06d925f1267e038faf5054b627f787c62679" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.963071 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-updater" containerID="cri-o://5ff384d32c8d75a73ee6531a498166a9a92a041873b8c7ad5e05a895c22e3169" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 
07:10:48.963142 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-auditor" containerID="cri-o://f466112f5cf5c677998140264a00bdc4942d3665dc6890fbb4d02a14e09d0c8c" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.963134 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-reaper" containerID="cri-o://5e7c68436cd97b1f593ab9264c278df29b8e80ac51b4e098d14dd124ee1b2c1e" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.963186 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-replicator" containerID="cri-o://4e53625d93776d7465cf9078276b05e0d5f2a21d29f7ea53f7a482b7f6360701" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.963273 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-expirer" containerID="cri-o://4d6c05815a4ccce7125f02e62bb6eb22e5d591aea64da79c5c940a36ac7a7419" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.963293 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-auditor" containerID="cri-o://5eb0615f1e25d8ed10333372554ae24ceed057df26415da984fe73b356536f32" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.963348 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" 
containerName="container-server" containerID="cri-o://dd20aaafc6222606d615195d877cf3ecc39e882d13d02321b14e25af5503caa3" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.963394 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-auditor" containerID="cri-o://a471594a24f6c04ffd45f65413594d956b801ba81c51eebee2804e7472c524b1" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.963444 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-updater" containerID="cri-o://561a640d039971120012fc32fae2ad63b15e794870a2bfb98e1bc57d31bd6892" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.963504 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="swift-recon-cron" containerID="cri-o://120e8dc921efeede5b8a0b18b58165dc52818c501e276126505583cb530ee8f4" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.963279 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-replicator" containerID="cri-o://e771edc6b563d0733ff425f35a73b560fc0d0c6a71e67c7bda204451260b072d" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.963622 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-replicator" containerID="cri-o://334271258c82dd525dc61b240cd122a3b14ca81ce7673cebb9aa2605fb2aa242" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.963488 4810 
kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="rsync" containerID="cri-o://7b39620d558ae9b8c90805f48bde7b412079fc41303ca92bff92671e43b5b217" gracePeriod=30 Jan 10 07:10:48 crc kubenswrapper[4810]: I0110 07:10:48.965131 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-server" containerID="cri-o://ac4439272dd5f448a3c541797c60d7bb154923babfcfe81ab6729ba4f44d4d28" gracePeriod=30 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.007580 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"] Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.007783 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" podUID="54f138ae-b546-44c2-aca7-b1f9efabf15f" containerName="proxy-httpd" containerID="cri-o://5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b" gracePeriod=30 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.008141 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" podUID="54f138ae-b546-44c2-aca7-b1f9efabf15f" containerName="proxy-server" containerID="cri-o://6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c" gracePeriod=30 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.701142 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d19d63-87c8-4b90-9c8f-19de9a4c024d" path="/var/lib/kubelet/pods/b1d19d63-87c8-4b90-9c8f-19de9a4c024d/volumes" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.858571 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.886011 4810 generic.go:334] "Generic (PLEG): container finished" podID="54f138ae-b546-44c2-aca7-b1f9efabf15f" containerID="6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.886050 4810 generic.go:334] "Generic (PLEG): container finished" podID="54f138ae-b546-44c2-aca7-b1f9efabf15f" containerID="5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.886092 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" event={"ID":"54f138ae-b546-44c2-aca7-b1f9efabf15f","Type":"ContainerDied","Data":"6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.886121 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" event={"ID":"54f138ae-b546-44c2-aca7-b1f9efabf15f","Type":"ContainerDied","Data":"5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.886134 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" event={"ID":"54f138ae-b546-44c2-aca7-b1f9efabf15f","Type":"ContainerDied","Data":"4e39d5c0e1917d9d888d51c3e518d55a20e027a765fa5b7df7654f92cf825485"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.886153 4810 scope.go:117] "RemoveContainer" containerID="6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.886155 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-6bb4649ff-699nl" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894363 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="7b39620d558ae9b8c90805f48bde7b412079fc41303ca92bff92671e43b5b217" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894401 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="4d6c05815a4ccce7125f02e62bb6eb22e5d591aea64da79c5c940a36ac7a7419" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894431 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="561a640d039971120012fc32fae2ad63b15e794870a2bfb98e1bc57d31bd6892" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894441 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="a471594a24f6c04ffd45f65413594d956b801ba81c51eebee2804e7472c524b1" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894449 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="334271258c82dd525dc61b240cd122a3b14ca81ce7673cebb9aa2605fb2aa242" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894456 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="ac4439272dd5f448a3c541797c60d7bb154923babfcfe81ab6729ba4f44d4d28" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894464 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="5ff384d32c8d75a73ee6531a498166a9a92a041873b8c7ad5e05a895c22e3169" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894482 4810 generic.go:334] "Generic (PLEG): container finished" 
podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="5eb0615f1e25d8ed10333372554ae24ceed057df26415da984fe73b356536f32" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894560 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="e771edc6b563d0733ff425f35a73b560fc0d0c6a71e67c7bda204451260b072d" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894587 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="dd20aaafc6222606d615195d877cf3ecc39e882d13d02321b14e25af5503caa3" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894597 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="5e7c68436cd97b1f593ab9264c278df29b8e80ac51b4e098d14dd124ee1b2c1e" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894605 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="f466112f5cf5c677998140264a00bdc4942d3665dc6890fbb4d02a14e09d0c8c" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894613 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="4e53625d93776d7465cf9078276b05e0d5f2a21d29f7ea53f7a482b7f6360701" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894623 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="db11dec07421fe11e9e4e60d260e06d925f1267e038faf5054b627f787c62679" exitCode=0 Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894666 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"7b39620d558ae9b8c90805f48bde7b412079fc41303ca92bff92671e43b5b217"} Jan 10 07:10:49 crc 
kubenswrapper[4810]: I0110 07:10:49.894695 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"4d6c05815a4ccce7125f02e62bb6eb22e5d591aea64da79c5c940a36ac7a7419"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894708 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"561a640d039971120012fc32fae2ad63b15e794870a2bfb98e1bc57d31bd6892"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894721 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"a471594a24f6c04ffd45f65413594d956b801ba81c51eebee2804e7472c524b1"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894749 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"334271258c82dd525dc61b240cd122a3b14ca81ce7673cebb9aa2605fb2aa242"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894759 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"ac4439272dd5f448a3c541797c60d7bb154923babfcfe81ab6729ba4f44d4d28"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894768 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"5ff384d32c8d75a73ee6531a498166a9a92a041873b8c7ad5e05a895c22e3169"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894776 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"5eb0615f1e25d8ed10333372554ae24ceed057df26415da984fe73b356536f32"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894785 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"e771edc6b563d0733ff425f35a73b560fc0d0c6a71e67c7bda204451260b072d"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894794 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"dd20aaafc6222606d615195d877cf3ecc39e882d13d02321b14e25af5503caa3"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894802 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"5e7c68436cd97b1f593ab9264c278df29b8e80ac51b4e098d14dd124ee1b2c1e"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894830 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"f466112f5cf5c677998140264a00bdc4942d3665dc6890fbb4d02a14e09d0c8c"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894838 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"4e53625d93776d7465cf9078276b05e0d5f2a21d29f7ea53f7a482b7f6360701"} Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.894847 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"db11dec07421fe11e9e4e60d260e06d925f1267e038faf5054b627f787c62679"} Jan 10 
07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.909436 4810 scope.go:117] "RemoveContainer" containerID="5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.924148 4810 scope.go:117] "RemoveContainer" containerID="6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c" Jan 10 07:10:49 crc kubenswrapper[4810]: E0110 07:10:49.924614 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c\": container with ID starting with 6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c not found: ID does not exist" containerID="6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.924646 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c"} err="failed to get container status \"6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c\": rpc error: code = NotFound desc = could not find container \"6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c\": container with ID starting with 6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c not found: ID does not exist" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.924685 4810 scope.go:117] "RemoveContainer" containerID="5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b" Jan 10 07:10:49 crc kubenswrapper[4810]: E0110 07:10:49.926827 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b\": container with ID starting with 5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b not found: ID does not exist" 
containerID="5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.926895 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b"} err="failed to get container status \"5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b\": rpc error: code = NotFound desc = could not find container \"5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b\": container with ID starting with 5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b not found: ID does not exist" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.926934 4810 scope.go:117] "RemoveContainer" containerID="6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.927332 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c"} err="failed to get container status \"6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c\": rpc error: code = NotFound desc = could not find container \"6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c\": container with ID starting with 6a945f1d00c3e269b5b855f81be68378cd84381bad6556f9a5bf40f9019ae34c not found: ID does not exist" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.927355 4810 scope.go:117] "RemoveContainer" containerID="5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.927815 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b"} err="failed to get container status \"5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b\": rpc error: code = NotFound desc = could 
not find container \"5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b\": container with ID starting with 5a1607fa42085386cb7e5fe4e08330e99a90b91ffb463aaa50e5e2aa3e38134b not found: ID does not exist" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.927978 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift\") pod \"54f138ae-b546-44c2-aca7-b1f9efabf15f\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.928060 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54f138ae-b546-44c2-aca7-b1f9efabf15f-log-httpd\") pod \"54f138ae-b546-44c2-aca7-b1f9efabf15f\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.928098 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f138ae-b546-44c2-aca7-b1f9efabf15f-config-data\") pod \"54f138ae-b546-44c2-aca7-b1f9efabf15f\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.928168 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54f138ae-b546-44c2-aca7-b1f9efabf15f-run-httpd\") pod \"54f138ae-b546-44c2-aca7-b1f9efabf15f\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.928239 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmsmb\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-kube-api-access-bmsmb\") pod \"54f138ae-b546-44c2-aca7-b1f9efabf15f\" (UID: \"54f138ae-b546-44c2-aca7-b1f9efabf15f\") " Jan 10 07:10:49 crc 
kubenswrapper[4810]: I0110 07:10:49.928590 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54f138ae-b546-44c2-aca7-b1f9efabf15f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "54f138ae-b546-44c2-aca7-b1f9efabf15f" (UID: "54f138ae-b546-44c2-aca7-b1f9efabf15f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.928778 4810 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54f138ae-b546-44c2-aca7-b1f9efabf15f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.928888 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54f138ae-b546-44c2-aca7-b1f9efabf15f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "54f138ae-b546-44c2-aca7-b1f9efabf15f" (UID: "54f138ae-b546-44c2-aca7-b1f9efabf15f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.935109 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "54f138ae-b546-44c2-aca7-b1f9efabf15f" (UID: "54f138ae-b546-44c2-aca7-b1f9efabf15f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.935157 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-kube-api-access-bmsmb" (OuterVolumeSpecName: "kube-api-access-bmsmb") pod "54f138ae-b546-44c2-aca7-b1f9efabf15f" (UID: "54f138ae-b546-44c2-aca7-b1f9efabf15f"). InnerVolumeSpecName "kube-api-access-bmsmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:10:49 crc kubenswrapper[4810]: I0110 07:10:49.970596 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f138ae-b546-44c2-aca7-b1f9efabf15f-config-data" (OuterVolumeSpecName: "config-data") pod "54f138ae-b546-44c2-aca7-b1f9efabf15f" (UID: "54f138ae-b546-44c2-aca7-b1f9efabf15f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:10:50 crc kubenswrapper[4810]: I0110 07:10:50.030178 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:50 crc kubenswrapper[4810]: I0110 07:10:50.030538 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f138ae-b546-44c2-aca7-b1f9efabf15f-config-data\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:50 crc kubenswrapper[4810]: I0110 07:10:50.030555 4810 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54f138ae-b546-44c2-aca7-b1f9efabf15f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:50 crc kubenswrapper[4810]: I0110 07:10:50.030568 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmsmb\" (UniqueName: \"kubernetes.io/projected/54f138ae-b546-44c2-aca7-b1f9efabf15f-kube-api-access-bmsmb\") on node \"crc\" DevicePath \"\"" Jan 10 07:10:50 crc kubenswrapper[4810]: I0110 07:10:50.213528 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"] Jan 10 07:10:50 crc kubenswrapper[4810]: I0110 07:10:50.219638 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-6bb4649ff-699nl"] Jan 10 07:10:50 crc kubenswrapper[4810]: I0110 07:10:50.882776 4810 patch_prober.go:28] interesting 
pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 10 07:10:50 crc kubenswrapper[4810]: I0110 07:10:50.882844 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 10 07:10:50 crc kubenswrapper[4810]: I0110 07:10:50.882900 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 07:10:50 crc kubenswrapper[4810]: I0110 07:10:50.883729 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1621bdd26d8919cc52b7877444181bced5c9989faf9461e3b82b9a71cd444d4"} pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 10 07:10:50 crc kubenswrapper[4810]: I0110 07:10:50.883819 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" containerID="cri-o://a1621bdd26d8919cc52b7877444181bced5c9989faf9461e3b82b9a71cd444d4" gracePeriod=600 Jan 10 07:10:51 crc kubenswrapper[4810]: I0110 07:10:51.702157 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54f138ae-b546-44c2-aca7-b1f9efabf15f" path="/var/lib/kubelet/pods/54f138ae-b546-44c2-aca7-b1f9efabf15f/volumes" Jan 10 07:10:51 crc kubenswrapper[4810]: I0110 07:10:51.916544 4810 
generic.go:334] "Generic (PLEG): container finished" podID="b5b79429-9259-412f-bab8-27865ab7029b" containerID="a1621bdd26d8919cc52b7877444181bced5c9989faf9461e3b82b9a71cd444d4" exitCode=0 Jan 10 07:10:51 crc kubenswrapper[4810]: I0110 07:10:51.916614 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerDied","Data":"a1621bdd26d8919cc52b7877444181bced5c9989faf9461e3b82b9a71cd444d4"} Jan 10 07:10:51 crc kubenswrapper[4810]: I0110 07:10:51.916862 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerStarted","Data":"09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910"} Jan 10 07:10:51 crc kubenswrapper[4810]: I0110 07:10:51.916880 4810 scope.go:117] "RemoveContainer" containerID="ef69a96aed65e9ebe56b671768834cb4b9361c0d3969bd6b62fbd2f9b5387011" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.016756 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.095485 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-lock\") pod \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.095583 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfswb\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-kube-api-access-nfswb\") pod \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.095647 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.095807 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift\") pod \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.095870 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-cache\") pod \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\" (UID: \"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1\") " Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.096624 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-cache" (OuterVolumeSpecName: 
"cache") pod "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" (UID: "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.096988 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-lock" (OuterVolumeSpecName: "lock") pod "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" (UID: "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.102738 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" (UID: "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.103567 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-kube-api-access-nfswb" (OuterVolumeSpecName: "kube-api-access-nfswb") pod "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" (UID: "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1"). InnerVolumeSpecName "kube-api-access-nfswb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.104219 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "swift") pod "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" (UID: "1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.155734 4810 generic.go:334] "Generic (PLEG): container finished" podID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerID="120e8dc921efeede5b8a0b18b58165dc52818c501e276126505583cb530ee8f4" exitCode=137 Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.155781 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"120e8dc921efeede5b8a0b18b58165dc52818c501e276126505583cb530ee8f4"} Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.155809 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1","Type":"ContainerDied","Data":"61323b3cefbf9caa503a660138d57ab7e24603773d9bc2d3df8913114b7333e6"} Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.155828 4810 scope.go:117] "RemoveContainer" containerID="120e8dc921efeede5b8a0b18b58165dc52818c501e276126505583cb530ee8f4" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.155831 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.179282 4810 scope.go:117] "RemoveContainer" containerID="7b39620d558ae9b8c90805f48bde7b412079fc41303ca92bff92671e43b5b217" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.197476 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.197544 4810 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-cache\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.197561 4810 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-lock\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.197574 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfswb\" (UniqueName: \"kubernetes.io/projected/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1-kube-api-access-nfswb\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.197643 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.198768 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.203968 4810 scope.go:117] "RemoveContainer" containerID="4d6c05815a4ccce7125f02e62bb6eb22e5d591aea64da79c5c940a36ac7a7419" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.205015 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.209028 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.223716 4810 scope.go:117] "RemoveContainer" containerID="561a640d039971120012fc32fae2ad63b15e794870a2bfb98e1bc57d31bd6892" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.242315 4810 scope.go:117] "RemoveContainer" containerID="a471594a24f6c04ffd45f65413594d956b801ba81c51eebee2804e7472c524b1" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.263384 4810 scope.go:117] "RemoveContainer" containerID="334271258c82dd525dc61b240cd122a3b14ca81ce7673cebb9aa2605fb2aa242" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.277893 4810 scope.go:117] "RemoveContainer" containerID="ac4439272dd5f448a3c541797c60d7bb154923babfcfe81ab6729ba4f44d4d28" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.307606 4810 scope.go:117] "RemoveContainer" containerID="5ff384d32c8d75a73ee6531a498166a9a92a041873b8c7ad5e05a895c22e3169" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.308433 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.329215 4810 scope.go:117] "RemoveContainer" containerID="5eb0615f1e25d8ed10333372554ae24ceed057df26415da984fe73b356536f32" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.344433 4810 scope.go:117] "RemoveContainer" containerID="e771edc6b563d0733ff425f35a73b560fc0d0c6a71e67c7bda204451260b072d" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.361963 4810 scope.go:117] "RemoveContainer" containerID="dd20aaafc6222606d615195d877cf3ecc39e882d13d02321b14e25af5503caa3" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 
07:11:20.381393 4810 scope.go:117] "RemoveContainer" containerID="5e7c68436cd97b1f593ab9264c278df29b8e80ac51b4e098d14dd124ee1b2c1e" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.404801 4810 scope.go:117] "RemoveContainer" containerID="f466112f5cf5c677998140264a00bdc4942d3665dc6890fbb4d02a14e09d0c8c" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.420480 4810 scope.go:117] "RemoveContainer" containerID="4e53625d93776d7465cf9078276b05e0d5f2a21d29f7ea53f7a482b7f6360701" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.435390 4810 scope.go:117] "RemoveContainer" containerID="db11dec07421fe11e9e4e60d260e06d925f1267e038faf5054b627f787c62679" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.454741 4810 scope.go:117] "RemoveContainer" containerID="120e8dc921efeede5b8a0b18b58165dc52818c501e276126505583cb530ee8f4" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.455284 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"120e8dc921efeede5b8a0b18b58165dc52818c501e276126505583cb530ee8f4\": container with ID starting with 120e8dc921efeede5b8a0b18b58165dc52818c501e276126505583cb530ee8f4 not found: ID does not exist" containerID="120e8dc921efeede5b8a0b18b58165dc52818c501e276126505583cb530ee8f4" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.455337 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"120e8dc921efeede5b8a0b18b58165dc52818c501e276126505583cb530ee8f4"} err="failed to get container status \"120e8dc921efeede5b8a0b18b58165dc52818c501e276126505583cb530ee8f4\": rpc error: code = NotFound desc = could not find container \"120e8dc921efeede5b8a0b18b58165dc52818c501e276126505583cb530ee8f4\": container with ID starting with 120e8dc921efeede5b8a0b18b58165dc52818c501e276126505583cb530ee8f4 not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.455369 4810 scope.go:117] 
"RemoveContainer" containerID="7b39620d558ae9b8c90805f48bde7b412079fc41303ca92bff92671e43b5b217" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.455866 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b39620d558ae9b8c90805f48bde7b412079fc41303ca92bff92671e43b5b217\": container with ID starting with 7b39620d558ae9b8c90805f48bde7b412079fc41303ca92bff92671e43b5b217 not found: ID does not exist" containerID="7b39620d558ae9b8c90805f48bde7b412079fc41303ca92bff92671e43b5b217" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.455918 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b39620d558ae9b8c90805f48bde7b412079fc41303ca92bff92671e43b5b217"} err="failed to get container status \"7b39620d558ae9b8c90805f48bde7b412079fc41303ca92bff92671e43b5b217\": rpc error: code = NotFound desc = could not find container \"7b39620d558ae9b8c90805f48bde7b412079fc41303ca92bff92671e43b5b217\": container with ID starting with 7b39620d558ae9b8c90805f48bde7b412079fc41303ca92bff92671e43b5b217 not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.455956 4810 scope.go:117] "RemoveContainer" containerID="4d6c05815a4ccce7125f02e62bb6eb22e5d591aea64da79c5c940a36ac7a7419" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.456383 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d6c05815a4ccce7125f02e62bb6eb22e5d591aea64da79c5c940a36ac7a7419\": container with ID starting with 4d6c05815a4ccce7125f02e62bb6eb22e5d591aea64da79c5c940a36ac7a7419 not found: ID does not exist" containerID="4d6c05815a4ccce7125f02e62bb6eb22e5d591aea64da79c5c940a36ac7a7419" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.456412 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4d6c05815a4ccce7125f02e62bb6eb22e5d591aea64da79c5c940a36ac7a7419"} err="failed to get container status \"4d6c05815a4ccce7125f02e62bb6eb22e5d591aea64da79c5c940a36ac7a7419\": rpc error: code = NotFound desc = could not find container \"4d6c05815a4ccce7125f02e62bb6eb22e5d591aea64da79c5c940a36ac7a7419\": container with ID starting with 4d6c05815a4ccce7125f02e62bb6eb22e5d591aea64da79c5c940a36ac7a7419 not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.456430 4810 scope.go:117] "RemoveContainer" containerID="561a640d039971120012fc32fae2ad63b15e794870a2bfb98e1bc57d31bd6892" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.456747 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"561a640d039971120012fc32fae2ad63b15e794870a2bfb98e1bc57d31bd6892\": container with ID starting with 561a640d039971120012fc32fae2ad63b15e794870a2bfb98e1bc57d31bd6892 not found: ID does not exist" containerID="561a640d039971120012fc32fae2ad63b15e794870a2bfb98e1bc57d31bd6892" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.456808 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"561a640d039971120012fc32fae2ad63b15e794870a2bfb98e1bc57d31bd6892"} err="failed to get container status \"561a640d039971120012fc32fae2ad63b15e794870a2bfb98e1bc57d31bd6892\": rpc error: code = NotFound desc = could not find container \"561a640d039971120012fc32fae2ad63b15e794870a2bfb98e1bc57d31bd6892\": container with ID starting with 561a640d039971120012fc32fae2ad63b15e794870a2bfb98e1bc57d31bd6892 not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.456851 4810 scope.go:117] "RemoveContainer" containerID="a471594a24f6c04ffd45f65413594d956b801ba81c51eebee2804e7472c524b1" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.457250 4810 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a471594a24f6c04ffd45f65413594d956b801ba81c51eebee2804e7472c524b1\": container with ID starting with a471594a24f6c04ffd45f65413594d956b801ba81c51eebee2804e7472c524b1 not found: ID does not exist" containerID="a471594a24f6c04ffd45f65413594d956b801ba81c51eebee2804e7472c524b1" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.457291 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a471594a24f6c04ffd45f65413594d956b801ba81c51eebee2804e7472c524b1"} err="failed to get container status \"a471594a24f6c04ffd45f65413594d956b801ba81c51eebee2804e7472c524b1\": rpc error: code = NotFound desc = could not find container \"a471594a24f6c04ffd45f65413594d956b801ba81c51eebee2804e7472c524b1\": container with ID starting with a471594a24f6c04ffd45f65413594d956b801ba81c51eebee2804e7472c524b1 not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.457320 4810 scope.go:117] "RemoveContainer" containerID="334271258c82dd525dc61b240cd122a3b14ca81ce7673cebb9aa2605fb2aa242" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.457660 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"334271258c82dd525dc61b240cd122a3b14ca81ce7673cebb9aa2605fb2aa242\": container with ID starting with 334271258c82dd525dc61b240cd122a3b14ca81ce7673cebb9aa2605fb2aa242 not found: ID does not exist" containerID="334271258c82dd525dc61b240cd122a3b14ca81ce7673cebb9aa2605fb2aa242" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.457690 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"334271258c82dd525dc61b240cd122a3b14ca81ce7673cebb9aa2605fb2aa242"} err="failed to get container status \"334271258c82dd525dc61b240cd122a3b14ca81ce7673cebb9aa2605fb2aa242\": rpc error: code = NotFound desc = could not find container 
\"334271258c82dd525dc61b240cd122a3b14ca81ce7673cebb9aa2605fb2aa242\": container with ID starting with 334271258c82dd525dc61b240cd122a3b14ca81ce7673cebb9aa2605fb2aa242 not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.457708 4810 scope.go:117] "RemoveContainer" containerID="ac4439272dd5f448a3c541797c60d7bb154923babfcfe81ab6729ba4f44d4d28" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.457964 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac4439272dd5f448a3c541797c60d7bb154923babfcfe81ab6729ba4f44d4d28\": container with ID starting with ac4439272dd5f448a3c541797c60d7bb154923babfcfe81ab6729ba4f44d4d28 not found: ID does not exist" containerID="ac4439272dd5f448a3c541797c60d7bb154923babfcfe81ab6729ba4f44d4d28" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.458003 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac4439272dd5f448a3c541797c60d7bb154923babfcfe81ab6729ba4f44d4d28"} err="failed to get container status \"ac4439272dd5f448a3c541797c60d7bb154923babfcfe81ab6729ba4f44d4d28\": rpc error: code = NotFound desc = could not find container \"ac4439272dd5f448a3c541797c60d7bb154923babfcfe81ab6729ba4f44d4d28\": container with ID starting with ac4439272dd5f448a3c541797c60d7bb154923babfcfe81ab6729ba4f44d4d28 not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.458027 4810 scope.go:117] "RemoveContainer" containerID="5ff384d32c8d75a73ee6531a498166a9a92a041873b8c7ad5e05a895c22e3169" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.458655 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ff384d32c8d75a73ee6531a498166a9a92a041873b8c7ad5e05a895c22e3169\": container with ID starting with 5ff384d32c8d75a73ee6531a498166a9a92a041873b8c7ad5e05a895c22e3169 not found: ID does not exist" 
containerID="5ff384d32c8d75a73ee6531a498166a9a92a041873b8c7ad5e05a895c22e3169" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.458703 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ff384d32c8d75a73ee6531a498166a9a92a041873b8c7ad5e05a895c22e3169"} err="failed to get container status \"5ff384d32c8d75a73ee6531a498166a9a92a041873b8c7ad5e05a895c22e3169\": rpc error: code = NotFound desc = could not find container \"5ff384d32c8d75a73ee6531a498166a9a92a041873b8c7ad5e05a895c22e3169\": container with ID starting with 5ff384d32c8d75a73ee6531a498166a9a92a041873b8c7ad5e05a895c22e3169 not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.458733 4810 scope.go:117] "RemoveContainer" containerID="5eb0615f1e25d8ed10333372554ae24ceed057df26415da984fe73b356536f32" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.459096 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eb0615f1e25d8ed10333372554ae24ceed057df26415da984fe73b356536f32\": container with ID starting with 5eb0615f1e25d8ed10333372554ae24ceed057df26415da984fe73b356536f32 not found: ID does not exist" containerID="5eb0615f1e25d8ed10333372554ae24ceed057df26415da984fe73b356536f32" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.459129 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb0615f1e25d8ed10333372554ae24ceed057df26415da984fe73b356536f32"} err="failed to get container status \"5eb0615f1e25d8ed10333372554ae24ceed057df26415da984fe73b356536f32\": rpc error: code = NotFound desc = could not find container \"5eb0615f1e25d8ed10333372554ae24ceed057df26415da984fe73b356536f32\": container with ID starting with 5eb0615f1e25d8ed10333372554ae24ceed057df26415da984fe73b356536f32 not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.459178 4810 scope.go:117] 
"RemoveContainer" containerID="e771edc6b563d0733ff425f35a73b560fc0d0c6a71e67c7bda204451260b072d" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.459572 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e771edc6b563d0733ff425f35a73b560fc0d0c6a71e67c7bda204451260b072d\": container with ID starting with e771edc6b563d0733ff425f35a73b560fc0d0c6a71e67c7bda204451260b072d not found: ID does not exist" containerID="e771edc6b563d0733ff425f35a73b560fc0d0c6a71e67c7bda204451260b072d" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.459614 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e771edc6b563d0733ff425f35a73b560fc0d0c6a71e67c7bda204451260b072d"} err="failed to get container status \"e771edc6b563d0733ff425f35a73b560fc0d0c6a71e67c7bda204451260b072d\": rpc error: code = NotFound desc = could not find container \"e771edc6b563d0733ff425f35a73b560fc0d0c6a71e67c7bda204451260b072d\": container with ID starting with e771edc6b563d0733ff425f35a73b560fc0d0c6a71e67c7bda204451260b072d not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.459641 4810 scope.go:117] "RemoveContainer" containerID="dd20aaafc6222606d615195d877cf3ecc39e882d13d02321b14e25af5503caa3" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.459876 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd20aaafc6222606d615195d877cf3ecc39e882d13d02321b14e25af5503caa3\": container with ID starting with dd20aaafc6222606d615195d877cf3ecc39e882d13d02321b14e25af5503caa3 not found: ID does not exist" containerID="dd20aaafc6222606d615195d877cf3ecc39e882d13d02321b14e25af5503caa3" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.459906 4810 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dd20aaafc6222606d615195d877cf3ecc39e882d13d02321b14e25af5503caa3"} err="failed to get container status \"dd20aaafc6222606d615195d877cf3ecc39e882d13d02321b14e25af5503caa3\": rpc error: code = NotFound desc = could not find container \"dd20aaafc6222606d615195d877cf3ecc39e882d13d02321b14e25af5503caa3\": container with ID starting with dd20aaafc6222606d615195d877cf3ecc39e882d13d02321b14e25af5503caa3 not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.459926 4810 scope.go:117] "RemoveContainer" containerID="5e7c68436cd97b1f593ab9264c278df29b8e80ac51b4e098d14dd124ee1b2c1e" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.460212 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e7c68436cd97b1f593ab9264c278df29b8e80ac51b4e098d14dd124ee1b2c1e\": container with ID starting with 5e7c68436cd97b1f593ab9264c278df29b8e80ac51b4e098d14dd124ee1b2c1e not found: ID does not exist" containerID="5e7c68436cd97b1f593ab9264c278df29b8e80ac51b4e098d14dd124ee1b2c1e" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.460240 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e7c68436cd97b1f593ab9264c278df29b8e80ac51b4e098d14dd124ee1b2c1e"} err="failed to get container status \"5e7c68436cd97b1f593ab9264c278df29b8e80ac51b4e098d14dd124ee1b2c1e\": rpc error: code = NotFound desc = could not find container \"5e7c68436cd97b1f593ab9264c278df29b8e80ac51b4e098d14dd124ee1b2c1e\": container with ID starting with 5e7c68436cd97b1f593ab9264c278df29b8e80ac51b4e098d14dd124ee1b2c1e not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.460258 4810 scope.go:117] "RemoveContainer" containerID="f466112f5cf5c677998140264a00bdc4942d3665dc6890fbb4d02a14e09d0c8c" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.460521 4810 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f466112f5cf5c677998140264a00bdc4942d3665dc6890fbb4d02a14e09d0c8c\": container with ID starting with f466112f5cf5c677998140264a00bdc4942d3665dc6890fbb4d02a14e09d0c8c not found: ID does not exist" containerID="f466112f5cf5c677998140264a00bdc4942d3665dc6890fbb4d02a14e09d0c8c" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.460556 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f466112f5cf5c677998140264a00bdc4942d3665dc6890fbb4d02a14e09d0c8c"} err="failed to get container status \"f466112f5cf5c677998140264a00bdc4942d3665dc6890fbb4d02a14e09d0c8c\": rpc error: code = NotFound desc = could not find container \"f466112f5cf5c677998140264a00bdc4942d3665dc6890fbb4d02a14e09d0c8c\": container with ID starting with f466112f5cf5c677998140264a00bdc4942d3665dc6890fbb4d02a14e09d0c8c not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.460576 4810 scope.go:117] "RemoveContainer" containerID="4e53625d93776d7465cf9078276b05e0d5f2a21d29f7ea53f7a482b7f6360701" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.460863 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e53625d93776d7465cf9078276b05e0d5f2a21d29f7ea53f7a482b7f6360701\": container with ID starting with 4e53625d93776d7465cf9078276b05e0d5f2a21d29f7ea53f7a482b7f6360701 not found: ID does not exist" containerID="4e53625d93776d7465cf9078276b05e0d5f2a21d29f7ea53f7a482b7f6360701" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.460894 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e53625d93776d7465cf9078276b05e0d5f2a21d29f7ea53f7a482b7f6360701"} err="failed to get container status \"4e53625d93776d7465cf9078276b05e0d5f2a21d29f7ea53f7a482b7f6360701\": rpc error: code = NotFound desc = could not find container 
\"4e53625d93776d7465cf9078276b05e0d5f2a21d29f7ea53f7a482b7f6360701\": container with ID starting with 4e53625d93776d7465cf9078276b05e0d5f2a21d29f7ea53f7a482b7f6360701 not found: ID does not exist" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.460912 4810 scope.go:117] "RemoveContainer" containerID="db11dec07421fe11e9e4e60d260e06d925f1267e038faf5054b627f787c62679" Jan 10 07:11:20 crc kubenswrapper[4810]: E0110 07:11:20.461481 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db11dec07421fe11e9e4e60d260e06d925f1267e038faf5054b627f787c62679\": container with ID starting with db11dec07421fe11e9e4e60d260e06d925f1267e038faf5054b627f787c62679 not found: ID does not exist" containerID="db11dec07421fe11e9e4e60d260e06d925f1267e038faf5054b627f787c62679" Jan 10 07:11:20 crc kubenswrapper[4810]: I0110 07:11:20.461510 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db11dec07421fe11e9e4e60d260e06d925f1267e038faf5054b627f787c62679"} err="failed to get container status \"db11dec07421fe11e9e4e60d260e06d925f1267e038faf5054b627f787c62679\": rpc error: code = NotFound desc = could not find container \"db11dec07421fe11e9e4e60d260e06d925f1267e038faf5054b627f787c62679\": container with ID starting with db11dec07421fe11e9e4e60d260e06d925f1267e038faf5054b627f787c62679 not found: ID does not exist" Jan 10 07:11:21 crc kubenswrapper[4810]: I0110 07:11:21.702285 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" path="/var/lib/kubelet/pods/1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1/volumes" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.336616 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.336893 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b1d19d63-87c8-4b90-9c8f-19de9a4c024d" containerName="swift-ring-rebalance" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.336916 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d19d63-87c8-4b90-9c8f-19de9a4c024d" containerName="swift-ring-rebalance" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.336926 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-updater" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.336932 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-updater" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.336942 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="rsync" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.336948 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="rsync" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.336958 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-server" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.336964 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-server" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.336972 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-replicator" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.336979 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-replicator" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.336989 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="54f138ae-b546-44c2-aca7-b1f9efabf15f" containerName="proxy-httpd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.336994 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f138ae-b546-44c2-aca7-b1f9efabf15f" containerName="proxy-httpd" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.337003 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-auditor" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337011 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-auditor" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.337028 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-replicator" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337036 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-replicator" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.337044 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-reaper" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337050 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-reaper" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.337058 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-auditor" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337064 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-auditor" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.337074 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-updater" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337080 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-updater" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.337090 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f138ae-b546-44c2-aca7-b1f9efabf15f" containerName="proxy-server" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337096 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f138ae-b546-44c2-aca7-b1f9efabf15f" containerName="proxy-server" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.337103 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-auditor" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337108 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-auditor" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.337118 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-server" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337124 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-server" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.337136 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-replicator" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337142 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-replicator" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.337150 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="swift-recon-cron" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337156 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="swift-recon-cron" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.337163 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-expirer" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337169 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-expirer" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.337176 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-server" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337181 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-server" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337311 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-server" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337319 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f138ae-b546-44c2-aca7-b1f9efabf15f" containerName="proxy-server" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337326 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-replicator" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337334 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-auditor" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337343 4810 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-expirer" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337353 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-auditor" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337360 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-server" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337369 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="swift-recon-cron" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337381 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-auditor" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337386 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="rsync" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337393 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-reaper" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337401 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d19d63-87c8-4b90-9c8f-19de9a4c024d" containerName="swift-ring-rebalance" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337408 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f138ae-b546-44c2-aca7-b1f9efabf15f" containerName="proxy-httpd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337414 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-server" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337421 4810 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="account-replicator" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337430 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="object-updater" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337435 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-updater" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.337442 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e6a77fa-3a8f-4dc0-abc0-35cc22f00db1" containerName="container-replicator" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.340948 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.343709 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-storage-config-data" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.343933 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-conf" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.344605 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-files" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.347466 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-swift-dockercfg-wznrf" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.376134 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.437835 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/34ed3b6f-8748-4774-9a9e-ea06242039f1-cache\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.437878 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/34ed3b6f-8748-4774-9a9e-ea06242039f1-lock\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.437916 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.437967 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89qwj\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-kube-api-access-89qwj\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.437990 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.538885 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/34ed3b6f-8748-4774-9a9e-ea06242039f1-cache\") pod \"swift-storage-0\" (UID: 
\"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.538963 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/34ed3b6f-8748-4774-9a9e-ea06242039f1-lock\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.539027 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.539053 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89qwj\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-kube-api-access-89qwj\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.539084 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.539164 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.539184 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.539268 4810 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift podName:34ed3b6f-8748-4774-9a9e-ea06242039f1 nodeName:}" failed. No retries permitted until 2026-01-10 07:11:23.03924881 +0000 UTC m=+1511.654741783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift") pod "swift-storage-0" (UID: "34ed3b6f-8748-4774-9a9e-ea06242039f1") : configmap "swift-ring-files" not found Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.539546 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/34ed3b6f-8748-4774-9a9e-ea06242039f1-cache\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.539575 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/34ed3b6f-8748-4774-9a9e-ea06242039f1-lock\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.539826 4810 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") device mount path \"/mnt/openstack/pv01\"" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.568732 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x"] Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.570078 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.571460 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.579623 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"cert-swift-internal-svc" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.579770 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"combined-ca-bundle" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.579803 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"swift-proxy-config-data" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.580280 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"cert-swift-public-svc" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.586079 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89qwj\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-kube-api-access-89qwj\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.590002 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x"] Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.639998 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9h2j\" (UniqueName: \"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-kube-api-access-k9h2j\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: 
\"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.640046 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-internal-tls-certs\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.640086 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-public-tls-certs\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.640125 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-config-data\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.640181 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883bb1e8-674b-45a1-8f13-ac670d125b9e-log-httpd\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.640306 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.640359 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883bb1e8-674b-45a1-8f13-ac670d125b9e-run-httpd\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.640413 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-combined-ca-bundle\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.723817 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-vw5zd"] Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.724843 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.726489 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-config-data" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.727584 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"swift-kuttl-tests"/"swift-ring-scripts" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.741550 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-internal-tls-certs\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.741645 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-public-tls-certs\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.741729 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-config-data\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.741866 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883bb1e8-674b-45a1-8f13-ac670d125b9e-log-httpd\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 
10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.741920 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.741955 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883bb1e8-674b-45a1-8f13-ac670d125b9e-run-httpd\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.742010 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-combined-ca-bundle\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.742055 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9h2j\" (UniqueName: \"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-kube-api-access-k9h2j\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.743846 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-vw5zd"] Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.744402 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883bb1e8-674b-45a1-8f13-ac670d125b9e-log-httpd\") pod 
\"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.775157 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883bb1e8-674b-45a1-8f13-ac670d125b9e-run-httpd\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.775308 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.775327 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x: configmap "swift-ring-files" not found Jan 10 07:11:22 crc kubenswrapper[4810]: E0110 07:11:22.775379 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift podName:883bb1e8-674b-45a1-8f13-ac670d125b9e nodeName:}" failed. No retries permitted until 2026-01-10 07:11:23.275354154 +0000 UTC m=+1511.890847037 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift") pod "swift-proxy-fdbc998f6-xrq6x" (UID: "883bb1e8-674b-45a1-8f13-ac670d125b9e") : configmap "swift-ring-files" not found Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.780305 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-internal-tls-certs\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.780500 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-config-data\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.781217 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-combined-ca-bundle\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.785030 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-public-tls-certs\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.804119 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9h2j\" (UniqueName: 
\"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-kube-api-access-k9h2j\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.842949 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-combined-ca-bundle\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.843038 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d620acf8-286c-4110-8f64-69195e8f3c1f-etc-swift\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.843112 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-dispersionconf\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.843171 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzkrp\" (UniqueName: \"kubernetes.io/projected/d620acf8-286c-4110-8f64-69195e8f3c1f-kube-api-access-jzkrp\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.843284 4810 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d620acf8-286c-4110-8f64-69195e8f3c1f-ring-data-devices\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.843333 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-swiftconf\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.843969 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d620acf8-286c-4110-8f64-69195e8f3c1f-scripts\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.944816 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzkrp\" (UniqueName: \"kubernetes.io/projected/d620acf8-286c-4110-8f64-69195e8f3c1f-kube-api-access-jzkrp\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.944962 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d620acf8-286c-4110-8f64-69195e8f3c1f-ring-data-devices\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.945011 4810 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-swiftconf\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.945109 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d620acf8-286c-4110-8f64-69195e8f3c1f-scripts\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.945239 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-combined-ca-bundle\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.945397 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d620acf8-286c-4110-8f64-69195e8f3c1f-etc-swift\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.945437 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-dispersionconf\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.945852 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d620acf8-286c-4110-8f64-69195e8f3c1f-ring-data-devices\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.946008 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d620acf8-286c-4110-8f64-69195e8f3c1f-scripts\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.946438 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d620acf8-286c-4110-8f64-69195e8f3c1f-etc-swift\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.949008 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-combined-ca-bundle\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.954116 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-swiftconf\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.954552 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-dispersionconf\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:22 crc kubenswrapper[4810]: I0110 07:11:22.987580 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzkrp\" (UniqueName: \"kubernetes.io/projected/d620acf8-286c-4110-8f64-69195e8f3c1f-kube-api-access-jzkrp\") pod \"swift-ring-rebalance-vw5zd\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:23 crc kubenswrapper[4810]: I0110 07:11:23.046865 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:23 crc kubenswrapper[4810]: E0110 07:11:23.047015 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:11:23 crc kubenswrapper[4810]: E0110 07:11:23.047485 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 10 07:11:23 crc kubenswrapper[4810]: E0110 07:11:23.047559 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift podName:34ed3b6f-8748-4774-9a9e-ea06242039f1 nodeName:}" failed. No retries permitted until 2026-01-10 07:11:24.047535389 +0000 UTC m=+1512.663028312 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift") pod "swift-storage-0" (UID: "34ed3b6f-8748-4774-9a9e-ea06242039f1") : configmap "swift-ring-files" not found Jan 10 07:11:23 crc kubenswrapper[4810]: I0110 07:11:23.139496 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:23 crc kubenswrapper[4810]: I0110 07:11:23.354102 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:23 crc kubenswrapper[4810]: E0110 07:11:23.354314 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:11:23 crc kubenswrapper[4810]: E0110 07:11:23.354343 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x: configmap "swift-ring-files" not found Jan 10 07:11:23 crc kubenswrapper[4810]: E0110 07:11:23.354405 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift podName:883bb1e8-674b-45a1-8f13-ac670d125b9e nodeName:}" failed. No retries permitted until 2026-01-10 07:11:24.354386215 +0000 UTC m=+1512.969879098 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift") pod "swift-proxy-fdbc998f6-xrq6x" (UID: "883bb1e8-674b-45a1-8f13-ac670d125b9e") : configmap "swift-ring-files" not found Jan 10 07:11:23 crc kubenswrapper[4810]: I0110 07:11:23.563351 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-vw5zd"] Jan 10 07:11:23 crc kubenswrapper[4810]: W0110 07:11:23.581969 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd620acf8_286c_4110_8f64_69195e8f3c1f.slice/crio-82925cd8c15a3e6166d219deb63ab3feb72c84dbaaca0e5ef3729d5f3dd3bbb5 WatchSource:0}: Error finding container 82925cd8c15a3e6166d219deb63ab3feb72c84dbaaca0e5ef3729d5f3dd3bbb5: Status 404 returned error can't find the container with id 82925cd8c15a3e6166d219deb63ab3feb72c84dbaaca0e5ef3729d5f3dd3bbb5 Jan 10 07:11:24 crc kubenswrapper[4810]: I0110 07:11:24.064085 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:24 crc kubenswrapper[4810]: E0110 07:11:24.064282 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:11:24 crc kubenswrapper[4810]: E0110 07:11:24.064329 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 10 07:11:24 crc kubenswrapper[4810]: E0110 07:11:24.064391 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift podName:34ed3b6f-8748-4774-9a9e-ea06242039f1 nodeName:}" 
failed. No retries permitted until 2026-01-10 07:11:26.064374255 +0000 UTC m=+1514.679867148 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift") pod "swift-storage-0" (UID: "34ed3b6f-8748-4774-9a9e-ea06242039f1") : configmap "swift-ring-files" not found Jan 10 07:11:24 crc kubenswrapper[4810]: I0110 07:11:24.191748 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" event={"ID":"d620acf8-286c-4110-8f64-69195e8f3c1f","Type":"ContainerStarted","Data":"aec6c2dbe10e2ad998aa2121590251241d3f613f7290dedeabc60f8d5ebd564a"} Jan 10 07:11:24 crc kubenswrapper[4810]: I0110 07:11:24.192168 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" event={"ID":"d620acf8-286c-4110-8f64-69195e8f3c1f","Type":"ContainerStarted","Data":"82925cd8c15a3e6166d219deb63ab3feb72c84dbaaca0e5ef3729d5f3dd3bbb5"} Jan 10 07:11:24 crc kubenswrapper[4810]: I0110 07:11:24.220576 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" podStartSLOduration=2.220556728 podStartE2EDuration="2.220556728s" podCreationTimestamp="2026-01-10 07:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:11:24.215066166 +0000 UTC m=+1512.830559089" watchObservedRunningTime="2026-01-10 07:11:24.220556728 +0000 UTC m=+1512.836049621" Jan 10 07:11:24 crc kubenswrapper[4810]: I0110 07:11:24.370452 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:24 crc 
kubenswrapper[4810]: E0110 07:11:24.370757 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:11:24 crc kubenswrapper[4810]: E0110 07:11:24.370778 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x: configmap "swift-ring-files" not found Jan 10 07:11:24 crc kubenswrapper[4810]: E0110 07:11:24.370842 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift podName:883bb1e8-674b-45a1-8f13-ac670d125b9e nodeName:}" failed. No retries permitted until 2026-01-10 07:11:26.370820249 +0000 UTC m=+1514.986313142 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift") pod "swift-proxy-fdbc998f6-xrq6x" (UID: "883bb1e8-674b-45a1-8f13-ac670d125b9e") : configmap "swift-ring-files" not found Jan 10 07:11:26 crc kubenswrapper[4810]: I0110 07:11:26.097523 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:26 crc kubenswrapper[4810]: E0110 07:11:26.097718 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:11:26 crc kubenswrapper[4810]: E0110 07:11:26.097755 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 10 07:11:26 crc kubenswrapper[4810]: E0110 07:11:26.097842 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift 
podName:34ed3b6f-8748-4774-9a9e-ea06242039f1 nodeName:}" failed. No retries permitted until 2026-01-10 07:11:30.097814849 +0000 UTC m=+1518.713307762 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift") pod "swift-storage-0" (UID: "34ed3b6f-8748-4774-9a9e-ea06242039f1") : configmap "swift-ring-files" not found Jan 10 07:11:26 crc kubenswrapper[4810]: I0110 07:11:26.403522 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:26 crc kubenswrapper[4810]: E0110 07:11:26.403722 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:11:26 crc kubenswrapper[4810]: E0110 07:11:26.404006 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x: configmap "swift-ring-files" not found Jan 10 07:11:26 crc kubenswrapper[4810]: E0110 07:11:26.404094 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift podName:883bb1e8-674b-45a1-8f13-ac670d125b9e nodeName:}" failed. No retries permitted until 2026-01-10 07:11:30.404070789 +0000 UTC m=+1519.019563682 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift") pod "swift-proxy-fdbc998f6-xrq6x" (UID: "883bb1e8-674b-45a1-8f13-ac670d125b9e") : configmap "swift-ring-files" not found Jan 10 07:11:30 crc kubenswrapper[4810]: I0110 07:11:30.164341 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:30 crc kubenswrapper[4810]: E0110 07:11:30.164568 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:11:30 crc kubenswrapper[4810]: E0110 07:11:30.164840 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 10 07:11:30 crc kubenswrapper[4810]: E0110 07:11:30.164909 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift podName:34ed3b6f-8748-4774-9a9e-ea06242039f1 nodeName:}" failed. No retries permitted until 2026-01-10 07:11:38.164886271 +0000 UTC m=+1526.780379164 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift") pod "swift-storage-0" (UID: "34ed3b6f-8748-4774-9a9e-ea06242039f1") : configmap "swift-ring-files" not found Jan 10 07:11:30 crc kubenswrapper[4810]: I0110 07:11:30.468329 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:30 crc kubenswrapper[4810]: E0110 07:11:30.468532 4810 projected.go:288] Couldn't get configMap swift-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 10 07:11:30 crc kubenswrapper[4810]: E0110 07:11:30.468748 4810 projected.go:194] Error preparing data for projected volume etc-swift for pod swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x: configmap "swift-ring-files" not found Jan 10 07:11:30 crc kubenswrapper[4810]: E0110 07:11:30.468821 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift podName:883bb1e8-674b-45a1-8f13-ac670d125b9e nodeName:}" failed. No retries permitted until 2026-01-10 07:11:38.468797536 +0000 UTC m=+1527.084290419 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift") pod "swift-proxy-fdbc998f6-xrq6x" (UID: "883bb1e8-674b-45a1-8f13-ac670d125b9e") : configmap "swift-ring-files" not found Jan 10 07:11:32 crc kubenswrapper[4810]: I0110 07:11:32.341378 4810 scope.go:117] "RemoveContainer" containerID="c4b4aa3cf95cfef3de65cc4b9219ddcf3f5780f99f3a2d029ccbe580282dff46" Jan 10 07:11:34 crc kubenswrapper[4810]: I0110 07:11:34.286938 4810 generic.go:334] "Generic (PLEG): container finished" podID="d620acf8-286c-4110-8f64-69195e8f3c1f" containerID="aec6c2dbe10e2ad998aa2121590251241d3f613f7290dedeabc60f8d5ebd564a" exitCode=0 Jan 10 07:11:34 crc kubenswrapper[4810]: I0110 07:11:34.287081 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" event={"ID":"d620acf8-286c-4110-8f64-69195e8f3c1f","Type":"ContainerDied","Data":"aec6c2dbe10e2ad998aa2121590251241d3f613f7290dedeabc60f8d5ebd564a"} Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.652789 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.746355 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d620acf8-286c-4110-8f64-69195e8f3c1f-scripts\") pod \"d620acf8-286c-4110-8f64-69195e8f3c1f\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.746451 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-dispersionconf\") pod \"d620acf8-286c-4110-8f64-69195e8f3c1f\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.746522 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d620acf8-286c-4110-8f64-69195e8f3c1f-ring-data-devices\") pod \"d620acf8-286c-4110-8f64-69195e8f3c1f\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.746585 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d620acf8-286c-4110-8f64-69195e8f3c1f-etc-swift\") pod \"d620acf8-286c-4110-8f64-69195e8f3c1f\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.747544 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzkrp\" (UniqueName: \"kubernetes.io/projected/d620acf8-286c-4110-8f64-69195e8f3c1f-kube-api-access-jzkrp\") pod \"d620acf8-286c-4110-8f64-69195e8f3c1f\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.747637 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" 
(UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-swiftconf\") pod \"d620acf8-286c-4110-8f64-69195e8f3c1f\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.747723 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-combined-ca-bundle\") pod \"d620acf8-286c-4110-8f64-69195e8f3c1f\" (UID: \"d620acf8-286c-4110-8f64-69195e8f3c1f\") " Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.749399 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d620acf8-286c-4110-8f64-69195e8f3c1f-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d620acf8-286c-4110-8f64-69195e8f3c1f" (UID: "d620acf8-286c-4110-8f64-69195e8f3c1f"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.749606 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d620acf8-286c-4110-8f64-69195e8f3c1f-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d620acf8-286c-4110-8f64-69195e8f3c1f" (UID: "d620acf8-286c-4110-8f64-69195e8f3c1f"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.757410 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d620acf8-286c-4110-8f64-69195e8f3c1f-kube-api-access-jzkrp" (OuterVolumeSpecName: "kube-api-access-jzkrp") pod "d620acf8-286c-4110-8f64-69195e8f3c1f" (UID: "d620acf8-286c-4110-8f64-69195e8f3c1f"). InnerVolumeSpecName "kube-api-access-jzkrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.772765 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d620acf8-286c-4110-8f64-69195e8f3c1f" (UID: "d620acf8-286c-4110-8f64-69195e8f3c1f"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.773226 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d620acf8-286c-4110-8f64-69195e8f3c1f" (UID: "d620acf8-286c-4110-8f64-69195e8f3c1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.787425 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d620acf8-286c-4110-8f64-69195e8f3c1f" (UID: "d620acf8-286c-4110-8f64-69195e8f3c1f"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.788543 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d620acf8-286c-4110-8f64-69195e8f3c1f-scripts" (OuterVolumeSpecName: "scripts") pod "d620acf8-286c-4110-8f64-69195e8f3c1f" (UID: "d620acf8-286c-4110-8f64-69195e8f3c1f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.849933 4810 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.850249 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.850345 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d620acf8-286c-4110-8f64-69195e8f3c1f-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.850421 4810 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d620acf8-286c-4110-8f64-69195e8f3c1f-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.850500 4810 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d620acf8-286c-4110-8f64-69195e8f3c1f-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.850575 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d620acf8-286c-4110-8f64-69195e8f3c1f-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:35 crc kubenswrapper[4810]: I0110 07:11:35.850667 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzkrp\" (UniqueName: \"kubernetes.io/projected/d620acf8-286c-4110-8f64-69195e8f3c1f-kube-api-access-jzkrp\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:36 crc kubenswrapper[4810]: I0110 07:11:36.307398 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" event={"ID":"d620acf8-286c-4110-8f64-69195e8f3c1f","Type":"ContainerDied","Data":"82925cd8c15a3e6166d219deb63ab3feb72c84dbaaca0e5ef3729d5f3dd3bbb5"} Jan 10 07:11:36 crc kubenswrapper[4810]: I0110 07:11:36.307455 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82925cd8c15a3e6166d219deb63ab3feb72c84dbaaca0e5ef3729d5f3dd3bbb5" Jan 10 07:11:36 crc kubenswrapper[4810]: I0110 07:11:36.307470 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-ring-rebalance-vw5zd" Jan 10 07:11:38 crc kubenswrapper[4810]: I0110 07:11:38.187687 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:38 crc kubenswrapper[4810]: I0110 07:11:38.196558 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift\") pod \"swift-storage-0\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:38 crc kubenswrapper[4810]: I0110 07:11:38.273669 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:11:38 crc kubenswrapper[4810]: I0110 07:11:38.493294 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:38 crc kubenswrapper[4810]: I0110 07:11:38.501976 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift\") pod \"swift-proxy-fdbc998f6-xrq6x\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:38 crc kubenswrapper[4810]: I0110 07:11:38.537574 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:38 crc kubenswrapper[4810]: I0110 07:11:38.769476 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x"] Jan 10 07:11:38 crc kubenswrapper[4810]: I0110 07:11:38.823204 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:11:39 crc kubenswrapper[4810]: I0110 07:11:39.336659 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" event={"ID":"883bb1e8-674b-45a1-8f13-ac670d125b9e","Type":"ContainerStarted","Data":"75709047ba9180a053c8a0d6b534d6a401e0064184da49a16130e4d6fddf69c6"} Jan 10 07:11:39 crc kubenswrapper[4810]: I0110 07:11:39.337913 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"dfc74341c1d3561631866f7b256ca5c960af6bd0edcd6024ffe692051396188c"} Jan 10 07:11:40 crc 
kubenswrapper[4810]: I0110 07:11:40.346336 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" event={"ID":"883bb1e8-674b-45a1-8f13-ac670d125b9e","Type":"ContainerStarted","Data":"a6ce478ad558da6aec32d977fb7b3ce00c94ca5242987c398863ccd32b6282f1"} Jan 10 07:11:40 crc kubenswrapper[4810]: I0110 07:11:40.346694 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" event={"ID":"883bb1e8-674b-45a1-8f13-ac670d125b9e","Type":"ContainerStarted","Data":"e4e5e6365c7c06cf98c1d2f4fe0a2200a83452889d4bc1deb59c21fd079950a3"} Jan 10 07:11:40 crc kubenswrapper[4810]: I0110 07:11:40.347919 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:40 crc kubenswrapper[4810]: I0110 07:11:40.347955 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:40 crc kubenswrapper[4810]: I0110 07:11:40.352339 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"5cd69498c4c17d2605d1ff79c0515262ed25719ef11d6adf732e054dcad05811"} Jan 10 07:11:40 crc kubenswrapper[4810]: I0110 07:11:40.352393 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"927ceb99b11e6e0043e156d46b8142a6812052c38142452bde3b493015994c1c"} Jan 10 07:11:40 crc kubenswrapper[4810]: I0110 07:11:40.352408 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"a7240cfdd840760a1f4e147eb5492d6c2497f0d86a8a688b8fde701f4f4160c4"} Jan 10 07:11:40 crc kubenswrapper[4810]: I0110 07:11:40.352420 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"2de7d474f5c13f3bcbb421932d104b79c33e79c827e2408b4dca307603dde94b"} Jan 10 07:11:40 crc kubenswrapper[4810]: I0110 07:11:40.352432 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"19105d30aace1aeef1d1cde32e9a6d51e9f2bcc485a4062c236b1e6dbf929150"} Jan 10 07:11:40 crc kubenswrapper[4810]: I0110 07:11:40.352444 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"688af7fdfaca34e9f4e7c02113b1d9471b23a61c9116eeebe3a1fff61619ed9d"} Jan 10 07:11:40 crc kubenswrapper[4810]: I0110 07:11:40.392892 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" podStartSLOduration=18.392870975 podStartE2EDuration="18.392870975s" podCreationTimestamp="2026-01-10 07:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:11:40.382602498 +0000 UTC m=+1528.998095421" watchObservedRunningTime="2026-01-10 07:11:40.392870975 +0000 UTC m=+1529.008363868" Jan 10 07:11:41 crc kubenswrapper[4810]: I0110 07:11:41.365107 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"750abf6afe4388e15620a8f70fbb4248201d556fa702c9f9bb55609803b7326e"} Jan 10 07:11:41 crc kubenswrapper[4810]: I0110 07:11:41.365407 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"f56624402b6b95d8c68e74773f4feb114075ba56bd6ea14d0d72bf3f448d72ed"} Jan 10 07:11:41 crc kubenswrapper[4810]: I0110 07:11:41.365418 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"3972a89d8789094796d7aa3abfc1784a848e161a5b44f6c048be3f2b385d74d8"} Jan 10 07:11:41 crc kubenswrapper[4810]: I0110 07:11:41.365429 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"b39fe8c1ca3aa2848f69c6bfd6a396644e1183d5068ed08735ded4fa1302b0c2"} Jan 10 07:11:42 crc kubenswrapper[4810]: I0110 07:11:42.377635 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"dddbabbe7583f51dc235c0b6dbc5acefc99aa8c863f1a15252f93a8d490de021"} Jan 10 07:11:42 crc kubenswrapper[4810]: I0110 07:11:42.377976 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"18fcca460c247694607a418b23e8da486e95700ebfe58e42b00d65642ae2842c"} Jan 10 07:11:42 crc kubenswrapper[4810]: I0110 07:11:42.377990 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"7f896df056151c14af99f33063e08abd1786888052e5b2b632b2382d225d6947"} Jan 10 07:11:42 crc kubenswrapper[4810]: I0110 07:11:42.378406 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"3fbe827f95c9c90cc45dfaec18ddb9fff51b8575b084ac96db37e5534d33d63a"} Jan 10 07:11:42 crc kubenswrapper[4810]: I0110 07:11:42.378425 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerStarted","Data":"b39aef561d413f22c457415ceb57fbf35b0bcad11b2330457457877b0e6c791d"} Jan 10 07:11:48 crc kubenswrapper[4810]: I0110 07:11:48.547405 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:48 crc kubenswrapper[4810]: I0110 07:11:48.548252 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:48 crc kubenswrapper[4810]: I0110 07:11:48.585937 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/swift-storage-0" podStartSLOduration=27.585918467 podStartE2EDuration="27.585918467s" podCreationTimestamp="2026-01-10 07:11:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:11:42.41048626 +0000 UTC m=+1531.025979153" watchObservedRunningTime="2026-01-10 07:11:48.585918467 +0000 UTC m=+1537.201411360" Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.912836 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-vw5zd"] Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.930250 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-ring-rebalance-vw5zd"] Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.936558 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.936993 4810 
kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-server" containerID="cri-o://688af7fdfaca34e9f4e7c02113b1d9471b23a61c9116eeebe3a1fff61619ed9d" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937037 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="swift-recon-cron" containerID="cri-o://dddbabbe7583f51dc235c0b6dbc5acefc99aa8c863f1a15252f93a8d490de021" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937122 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="rsync" containerID="cri-o://18fcca460c247694607a418b23e8da486e95700ebfe58e42b00d65642ae2842c" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937142 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-reaper" containerID="cri-o://a7240cfdd840760a1f4e147eb5492d6c2497f0d86a8a688b8fde701f4f4160c4" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937125 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-replicator" containerID="cri-o://5cd69498c4c17d2605d1ff79c0515262ed25719ef11d6adf732e054dcad05811" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937133 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-server" 
containerID="cri-o://927ceb99b11e6e0043e156d46b8142a6812052c38142452bde3b493015994c1c" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937113 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-auditor" containerID="cri-o://b39fe8c1ca3aa2848f69c6bfd6a396644e1183d5068ed08735ded4fa1302b0c2" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937071 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-updater" containerID="cri-o://3972a89d8789094796d7aa3abfc1784a848e161a5b44f6c048be3f2b385d74d8" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937273 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-replicator" containerID="cri-o://750abf6afe4388e15620a8f70fbb4248201d556fa702c9f9bb55609803b7326e" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937310 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-updater" containerID="cri-o://3fbe827f95c9c90cc45dfaec18ddb9fff51b8575b084ac96db37e5534d33d63a" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937286 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-auditor" containerID="cri-o://2de7d474f5c13f3bcbb421932d104b79c33e79c827e2408b4dca307603dde94b" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937310 4810 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-server" containerID="cri-o://f56624402b6b95d8c68e74773f4feb114075ba56bd6ea14d0d72bf3f448d72ed" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937320 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-expirer" containerID="cri-o://7f896df056151c14af99f33063e08abd1786888052e5b2b632b2382d225d6947" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937275 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-replicator" containerID="cri-o://19105d30aace1aeef1d1cde32e9a6d51e9f2bcc485a4062c236b1e6dbf929150" gracePeriod=30 Jan 10 07:11:49 crc kubenswrapper[4810]: I0110 07:11:49.937331 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-storage-0" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-auditor" containerID="cri-o://b39aef561d413f22c457415ceb57fbf35b0bcad11b2330457457877b0e6c791d" gracePeriod=30 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.057874 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x"] Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.060429 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" podUID="883bb1e8-674b-45a1-8f13-ac670d125b9e" containerName="proxy-httpd" containerID="cri-o://e4e5e6365c7c06cf98c1d2f4fe0a2200a83452889d4bc1deb59c21fd079950a3" gracePeriod=30 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.060928 4810 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" podUID="883bb1e8-674b-45a1-8f13-ac670d125b9e" containerName="proxy-server" containerID="cri-o://a6ce478ad558da6aec32d977fb7b3ce00c94ca5242987c398863ccd32b6282f1" gracePeriod=30 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.453223 4810 generic.go:334] "Generic (PLEG): container finished" podID="883bb1e8-674b-45a1-8f13-ac670d125b9e" containerID="e4e5e6365c7c06cf98c1d2f4fe0a2200a83452889d4bc1deb59c21fd079950a3" exitCode=0 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.453231 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" event={"ID":"883bb1e8-674b-45a1-8f13-ac670d125b9e","Type":"ContainerDied","Data":"e4e5e6365c7c06cf98c1d2f4fe0a2200a83452889d4bc1deb59c21fd079950a3"} Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459002 4810 generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="7f896df056151c14af99f33063e08abd1786888052e5b2b632b2382d225d6947" exitCode=0 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459030 4810 generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="3fbe827f95c9c90cc45dfaec18ddb9fff51b8575b084ac96db37e5534d33d63a" exitCode=0 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459039 4810 generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="b39aef561d413f22c457415ceb57fbf35b0bcad11b2330457457877b0e6c791d" exitCode=0 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459046 4810 generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="750abf6afe4388e15620a8f70fbb4248201d556fa702c9f9bb55609803b7326e" exitCode=0 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459055 4810 generic.go:334] "Generic (PLEG): container finished" 
podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="f56624402b6b95d8c68e74773f4feb114075ba56bd6ea14d0d72bf3f448d72ed" exitCode=0 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459062 4810 generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="3972a89d8789094796d7aa3abfc1784a848e161a5b44f6c048be3f2b385d74d8" exitCode=0 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459070 4810 generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="b39fe8c1ca3aa2848f69c6bfd6a396644e1183d5068ed08735ded4fa1302b0c2" exitCode=0 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459077 4810 generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="5cd69498c4c17d2605d1ff79c0515262ed25719ef11d6adf732e054dcad05811" exitCode=0 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459085 4810 generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="927ceb99b11e6e0043e156d46b8142a6812052c38142452bde3b493015994c1c" exitCode=0 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459092 4810 generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="a7240cfdd840760a1f4e147eb5492d6c2497f0d86a8a688b8fde701f4f4160c4" exitCode=0 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459100 4810 generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="2de7d474f5c13f3bcbb421932d104b79c33e79c827e2408b4dca307603dde94b" exitCode=0 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459107 4810 generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="19105d30aace1aeef1d1cde32e9a6d51e9f2bcc485a4062c236b1e6dbf929150" exitCode=0 Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459074 4810 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"7f896df056151c14af99f33063e08abd1786888052e5b2b632b2382d225d6947"} Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459141 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"3fbe827f95c9c90cc45dfaec18ddb9fff51b8575b084ac96db37e5534d33d63a"} Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459155 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"b39aef561d413f22c457415ceb57fbf35b0bcad11b2330457457877b0e6c791d"} Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459164 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"750abf6afe4388e15620a8f70fbb4248201d556fa702c9f9bb55609803b7326e"} Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459172 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"f56624402b6b95d8c68e74773f4feb114075ba56bd6ea14d0d72bf3f448d72ed"} Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459181 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"3972a89d8789094796d7aa3abfc1784a848e161a5b44f6c048be3f2b385d74d8"} Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459189 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" 
event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"b39fe8c1ca3aa2848f69c6bfd6a396644e1183d5068ed08735ded4fa1302b0c2"} Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459213 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"5cd69498c4c17d2605d1ff79c0515262ed25719ef11d6adf732e054dcad05811"} Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459222 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"927ceb99b11e6e0043e156d46b8142a6812052c38142452bde3b493015994c1c"} Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459238 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"a7240cfdd840760a1f4e147eb5492d6c2497f0d86a8a688b8fde701f4f4160c4"} Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459247 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"2de7d474f5c13f3bcbb421932d104b79c33e79c827e2408b4dca307603dde94b"} Jan 10 07:11:50 crc kubenswrapper[4810]: I0110 07:11:50.459257 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"19105d30aace1aeef1d1cde32e9a6d51e9f2bcc485a4062c236b1e6dbf929150"} Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.018021 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.117579 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883bb1e8-674b-45a1-8f13-ac670d125b9e-run-httpd\") pod \"883bb1e8-674b-45a1-8f13-ac670d125b9e\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.117918 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift\") pod \"883bb1e8-674b-45a1-8f13-ac670d125b9e\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.118055 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883bb1e8-674b-45a1-8f13-ac670d125b9e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "883bb1e8-674b-45a1-8f13-ac670d125b9e" (UID: "883bb1e8-674b-45a1-8f13-ac670d125b9e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.118063 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9h2j\" (UniqueName: \"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-kube-api-access-k9h2j\") pod \"883bb1e8-674b-45a1-8f13-ac670d125b9e\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.118166 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-config-data\") pod \"883bb1e8-674b-45a1-8f13-ac670d125b9e\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.118272 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-internal-tls-certs\") pod \"883bb1e8-674b-45a1-8f13-ac670d125b9e\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.118346 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-combined-ca-bundle\") pod \"883bb1e8-674b-45a1-8f13-ac670d125b9e\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.118381 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883bb1e8-674b-45a1-8f13-ac670d125b9e-log-httpd\") pod \"883bb1e8-674b-45a1-8f13-ac670d125b9e\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.118403 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-public-tls-certs\") pod \"883bb1e8-674b-45a1-8f13-ac670d125b9e\" (UID: \"883bb1e8-674b-45a1-8f13-ac670d125b9e\") " Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.118739 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883bb1e8-674b-45a1-8f13-ac670d125b9e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "883bb1e8-674b-45a1-8f13-ac670d125b9e" (UID: "883bb1e8-674b-45a1-8f13-ac670d125b9e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.118813 4810 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883bb1e8-674b-45a1-8f13-ac670d125b9e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.123383 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "883bb1e8-674b-45a1-8f13-ac670d125b9e" (UID: "883bb1e8-674b-45a1-8f13-ac670d125b9e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.126460 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-kube-api-access-k9h2j" (OuterVolumeSpecName: "kube-api-access-k9h2j") pod "883bb1e8-674b-45a1-8f13-ac670d125b9e" (UID: "883bb1e8-674b-45a1-8f13-ac670d125b9e"). InnerVolumeSpecName "kube-api-access-k9h2j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.155469 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-config-data" (OuterVolumeSpecName: "config-data") pod "883bb1e8-674b-45a1-8f13-ac670d125b9e" (UID: "883bb1e8-674b-45a1-8f13-ac670d125b9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.161561 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "883bb1e8-674b-45a1-8f13-ac670d125b9e" (UID: "883bb1e8-674b-45a1-8f13-ac670d125b9e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.171455 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "883bb1e8-674b-45a1-8f13-ac670d125b9e" (UID: "883bb1e8-674b-45a1-8f13-ac670d125b9e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.194710 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "883bb1e8-674b-45a1-8f13-ac670d125b9e" (UID: "883bb1e8-674b-45a1-8f13-ac670d125b9e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.220302 4810 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.220334 4810 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/883bb1e8-674b-45a1-8f13-ac670d125b9e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.220343 4810 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.220351 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.220359 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9h2j\" (UniqueName: \"kubernetes.io/projected/883bb1e8-674b-45a1-8f13-ac670d125b9e-kube-api-access-k9h2j\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.220371 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-config-data\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.220378 4810 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/883bb1e8-674b-45a1-8f13-ac670d125b9e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.476588 4810 
generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="18fcca460c247694607a418b23e8da486e95700ebfe58e42b00d65642ae2842c" exitCode=0 Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.476624 4810 generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerID="688af7fdfaca34e9f4e7c02113b1d9471b23a61c9116eeebe3a1fff61619ed9d" exitCode=0 Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.476665 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"18fcca460c247694607a418b23e8da486e95700ebfe58e42b00d65642ae2842c"} Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.476710 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"688af7fdfaca34e9f4e7c02113b1d9471b23a61c9116eeebe3a1fff61619ed9d"} Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.478679 4810 generic.go:334] "Generic (PLEG): container finished" podID="883bb1e8-674b-45a1-8f13-ac670d125b9e" containerID="a6ce478ad558da6aec32d977fb7b3ce00c94ca5242987c398863ccd32b6282f1" exitCode=0 Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.478711 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.478726 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" event={"ID":"883bb1e8-674b-45a1-8f13-ac670d125b9e","Type":"ContainerDied","Data":"a6ce478ad558da6aec32d977fb7b3ce00c94ca5242987c398863ccd32b6282f1"} Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.478766 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x" event={"ID":"883bb1e8-674b-45a1-8f13-ac670d125b9e","Type":"ContainerDied","Data":"75709047ba9180a053c8a0d6b534d6a401e0064184da49a16130e4d6fddf69c6"} Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.478785 4810 scope.go:117] "RemoveContainer" containerID="a6ce478ad558da6aec32d977fb7b3ce00c94ca5242987c398863ccd32b6282f1" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.510369 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x"] Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.513425 4810 scope.go:117] "RemoveContainer" containerID="e4e5e6365c7c06cf98c1d2f4fe0a2200a83452889d4bc1deb59c21fd079950a3" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.518166 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-proxy-fdbc998f6-xrq6x"] Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.536521 4810 scope.go:117] "RemoveContainer" containerID="a6ce478ad558da6aec32d977fb7b3ce00c94ca5242987c398863ccd32b6282f1" Jan 10 07:11:51 crc kubenswrapper[4810]: E0110 07:11:51.537074 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6ce478ad558da6aec32d977fb7b3ce00c94ca5242987c398863ccd32b6282f1\": container with ID starting with a6ce478ad558da6aec32d977fb7b3ce00c94ca5242987c398863ccd32b6282f1 not found: ID does not exist" 
containerID="a6ce478ad558da6aec32d977fb7b3ce00c94ca5242987c398863ccd32b6282f1" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.537115 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6ce478ad558da6aec32d977fb7b3ce00c94ca5242987c398863ccd32b6282f1"} err="failed to get container status \"a6ce478ad558da6aec32d977fb7b3ce00c94ca5242987c398863ccd32b6282f1\": rpc error: code = NotFound desc = could not find container \"a6ce478ad558da6aec32d977fb7b3ce00c94ca5242987c398863ccd32b6282f1\": container with ID starting with a6ce478ad558da6aec32d977fb7b3ce00c94ca5242987c398863ccd32b6282f1 not found: ID does not exist" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.537149 4810 scope.go:117] "RemoveContainer" containerID="e4e5e6365c7c06cf98c1d2f4fe0a2200a83452889d4bc1deb59c21fd079950a3" Jan 10 07:11:51 crc kubenswrapper[4810]: E0110 07:11:51.537670 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e5e6365c7c06cf98c1d2f4fe0a2200a83452889d4bc1deb59c21fd079950a3\": container with ID starting with e4e5e6365c7c06cf98c1d2f4fe0a2200a83452889d4bc1deb59c21fd079950a3 not found: ID does not exist" containerID="e4e5e6365c7c06cf98c1d2f4fe0a2200a83452889d4bc1deb59c21fd079950a3" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.537704 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e5e6365c7c06cf98c1d2f4fe0a2200a83452889d4bc1deb59c21fd079950a3"} err="failed to get container status \"e4e5e6365c7c06cf98c1d2f4fe0a2200a83452889d4bc1deb59c21fd079950a3\": rpc error: code = NotFound desc = could not find container \"e4e5e6365c7c06cf98c1d2f4fe0a2200a83452889d4bc1deb59c21fd079950a3\": container with ID starting with e4e5e6365c7c06cf98c1d2f4fe0a2200a83452889d4bc1deb59c21fd079950a3 not found: ID does not exist" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.700451 4810 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883bb1e8-674b-45a1-8f13-ac670d125b9e" path="/var/lib/kubelet/pods/883bb1e8-674b-45a1-8f13-ac670d125b9e/volumes" Jan 10 07:11:51 crc kubenswrapper[4810]: I0110 07:11:51.701244 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d620acf8-286c-4110-8f64-69195e8f3c1f" path="/var/lib/kubelet/pods/d620acf8-286c-4110-8f64-69195e8f3c1f/volumes" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.092754 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/root-account-create-update-gdxlc"] Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.103504 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/root-account-create-update-gdxlc"] Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.485860 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f8ztm"] Jan 10 07:12:07 crc kubenswrapper[4810]: E0110 07:12:07.486186 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883bb1e8-674b-45a1-8f13-ac670d125b9e" containerName="proxy-server" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.486790 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="883bb1e8-674b-45a1-8f13-ac670d125b9e" containerName="proxy-server" Jan 10 07:12:07 crc kubenswrapper[4810]: E0110 07:12:07.486814 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883bb1e8-674b-45a1-8f13-ac670d125b9e" containerName="proxy-httpd" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.486823 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="883bb1e8-674b-45a1-8f13-ac670d125b9e" containerName="proxy-httpd" Jan 10 07:12:07 crc kubenswrapper[4810]: E0110 07:12:07.486849 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d620acf8-286c-4110-8f64-69195e8f3c1f" containerName="swift-ring-rebalance" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 
07:12:07.486858 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d620acf8-286c-4110-8f64-69195e8f3c1f" containerName="swift-ring-rebalance" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.487032 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="883bb1e8-674b-45a1-8f13-ac670d125b9e" containerName="proxy-server" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.487060 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="d620acf8-286c-4110-8f64-69195e8f3c1f" containerName="swift-ring-rebalance" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.487070 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="883bb1e8-674b-45a1-8f13-ac670d125b9e" containerName="proxy-httpd" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.488467 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.501076 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f8ztm"] Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.660507 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d62155-9354-4007-b2c6-6074e9fcd8eb-utilities\") pod \"community-operators-f8ztm\" (UID: \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\") " pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.660670 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d62155-9354-4007-b2c6-6074e9fcd8eb-catalog-content\") pod \"community-operators-f8ztm\" (UID: \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\") " pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 
07:12:07.660730 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj4kz\" (UniqueName: \"kubernetes.io/projected/b4d62155-9354-4007-b2c6-6074e9fcd8eb-kube-api-access-wj4kz\") pod \"community-operators-f8ztm\" (UID: \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\") " pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.702722 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de0b1f3d-7fc9-4771-81c5-6722593deb1e" path="/var/lib/kubelet/pods/de0b1f3d-7fc9-4771-81c5-6722593deb1e/volumes" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.762174 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d62155-9354-4007-b2c6-6074e9fcd8eb-utilities\") pod \"community-operators-f8ztm\" (UID: \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\") " pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.762272 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d62155-9354-4007-b2c6-6074e9fcd8eb-catalog-content\") pod \"community-operators-f8ztm\" (UID: \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\") " pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.762301 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj4kz\" (UniqueName: \"kubernetes.io/projected/b4d62155-9354-4007-b2c6-6074e9fcd8eb-kube-api-access-wj4kz\") pod \"community-operators-f8ztm\" (UID: \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\") " pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.762685 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b4d62155-9354-4007-b2c6-6074e9fcd8eb-utilities\") pod \"community-operators-f8ztm\" (UID: \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\") " pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.762807 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d62155-9354-4007-b2c6-6074e9fcd8eb-catalog-content\") pod \"community-operators-f8ztm\" (UID: \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\") " pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.797281 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj4kz\" (UniqueName: \"kubernetes.io/projected/b4d62155-9354-4007-b2c6-6074e9fcd8eb-kube-api-access-wj4kz\") pod \"community-operators-f8ztm\" (UID: \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\") " pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:07 crc kubenswrapper[4810]: I0110 07:12:07.814079 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:08 crc kubenswrapper[4810]: I0110 07:12:08.316456 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f8ztm"] Jan 10 07:12:08 crc kubenswrapper[4810]: I0110 07:12:08.623280 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ztm" event={"ID":"b4d62155-9354-4007-b2c6-6074e9fcd8eb","Type":"ContainerStarted","Data":"3e7baaa427739c6e7d5fe3be372f594e638d0ca7f9d753cb489fe5807517434e"} Jan 10 07:12:09 crc kubenswrapper[4810]: I0110 07:12:09.631602 4810 generic.go:334] "Generic (PLEG): container finished" podID="b4d62155-9354-4007-b2c6-6074e9fcd8eb" containerID="5e8cd69942e4bf30cf79fc5edc23e22d0cd2ee295dd39ba678c0c55557550cc1" exitCode=0 Jan 10 07:12:09 crc kubenswrapper[4810]: I0110 07:12:09.631666 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ztm" event={"ID":"b4d62155-9354-4007-b2c6-6074e9fcd8eb","Type":"ContainerDied","Data":"5e8cd69942e4bf30cf79fc5edc23e22d0cd2ee295dd39ba678c0c55557550cc1"} Jan 10 07:12:09 crc kubenswrapper[4810]: I0110 07:12:09.633362 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 10 07:12:12 crc kubenswrapper[4810]: I0110 07:12:12.658904 4810 generic.go:334] "Generic (PLEG): container finished" podID="b4d62155-9354-4007-b2c6-6074e9fcd8eb" containerID="40e46da4b20b744506dbaaa38b004bf311c6c2f86006260b61ba134dd79ac3e6" exitCode=0 Jan 10 07:12:12 crc kubenswrapper[4810]: I0110 07:12:12.658966 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ztm" event={"ID":"b4d62155-9354-4007-b2c6-6074e9fcd8eb","Type":"ContainerDied","Data":"40e46da4b20b744506dbaaa38b004bf311c6c2f86006260b61ba134dd79ac3e6"} Jan 10 07:12:13 crc kubenswrapper[4810]: I0110 07:12:13.669132 4810 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-f8ztm" event={"ID":"b4d62155-9354-4007-b2c6-6074e9fcd8eb","Type":"ContainerStarted","Data":"340e1972035f60c0c9dfd9713cc715c2ae6ebb2e9270feef25fd2fffd936f63b"} Jan 10 07:12:13 crc kubenswrapper[4810]: I0110 07:12:13.691888 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f8ztm" podStartSLOduration=2.984406698 podStartE2EDuration="6.691869605s" podCreationTimestamp="2026-01-10 07:12:07 +0000 UTC" firstStartedPulling="2026-01-10 07:12:09.63306805 +0000 UTC m=+1558.248560933" lastFinishedPulling="2026-01-10 07:12:13.340530957 +0000 UTC m=+1561.956023840" observedRunningTime="2026-01-10 07:12:13.687642504 +0000 UTC m=+1562.303135387" watchObservedRunningTime="2026-01-10 07:12:13.691869605 +0000 UTC m=+1562.307362498" Jan 10 07:12:17 crc kubenswrapper[4810]: I0110 07:12:17.815094 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:17 crc kubenswrapper[4810]: I0110 07:12:17.815487 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:17 crc kubenswrapper[4810]: I0110 07:12:17.873617 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:18 crc kubenswrapper[4810]: I0110 07:12:18.747748 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:18 crc kubenswrapper[4810]: I0110 07:12:18.796340 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f8ztm"] Jan 10 07:12:20 crc kubenswrapper[4810]: I0110 07:12:20.729100 4810 generic.go:334] "Generic (PLEG): container finished" podID="34ed3b6f-8748-4774-9a9e-ea06242039f1" 
containerID="dddbabbe7583f51dc235c0b6dbc5acefc99aa8c863f1a15252f93a8d490de021" exitCode=137 Jan 10 07:12:20 crc kubenswrapper[4810]: I0110 07:12:20.730476 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f8ztm" podUID="b4d62155-9354-4007-b2c6-6074e9fcd8eb" containerName="registry-server" containerID="cri-o://340e1972035f60c0c9dfd9713cc715c2ae6ebb2e9270feef25fd2fffd936f63b" gracePeriod=2 Jan 10 07:12:20 crc kubenswrapper[4810]: I0110 07:12:20.729255 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"dddbabbe7583f51dc235c0b6dbc5acefc99aa8c863f1a15252f93a8d490de021"} Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.191572 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.334468 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.371711 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d62155-9354-4007-b2c6-6074e9fcd8eb-utilities\") pod \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\" (UID: \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\") " Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.372064 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d62155-9354-4007-b2c6-6074e9fcd8eb-catalog-content\") pod \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\" (UID: \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\") " Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.372226 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4kz\" (UniqueName: \"kubernetes.io/projected/b4d62155-9354-4007-b2c6-6074e9fcd8eb-kube-api-access-wj4kz\") pod \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\" (UID: \"b4d62155-9354-4007-b2c6-6074e9fcd8eb\") " Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.373866 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d62155-9354-4007-b2c6-6074e9fcd8eb-utilities" (OuterVolumeSpecName: "utilities") pod "b4d62155-9354-4007-b2c6-6074e9fcd8eb" (UID: "b4d62155-9354-4007-b2c6-6074e9fcd8eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.378866 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d62155-9354-4007-b2c6-6074e9fcd8eb-kube-api-access-wj4kz" (OuterVolumeSpecName: "kube-api-access-wj4kz") pod "b4d62155-9354-4007-b2c6-6074e9fcd8eb" (UID: "b4d62155-9354-4007-b2c6-6074e9fcd8eb"). InnerVolumeSpecName "kube-api-access-wj4kz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.431004 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d62155-9354-4007-b2c6-6074e9fcd8eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4d62155-9354-4007-b2c6-6074e9fcd8eb" (UID: "b4d62155-9354-4007-b2c6-6074e9fcd8eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.475144 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89qwj\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-kube-api-access-89qwj\") pod \"34ed3b6f-8748-4774-9a9e-ea06242039f1\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.475265 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"34ed3b6f-8748-4774-9a9e-ea06242039f1\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.475877 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/34ed3b6f-8748-4774-9a9e-ea06242039f1-cache\") pod \"34ed3b6f-8748-4774-9a9e-ea06242039f1\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.475918 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/34ed3b6f-8748-4774-9a9e-ea06242039f1-lock\") pod \"34ed3b6f-8748-4774-9a9e-ea06242039f1\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.476004 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift\") pod \"34ed3b6f-8748-4774-9a9e-ea06242039f1\" (UID: \"34ed3b6f-8748-4774-9a9e-ea06242039f1\") " Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.476312 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34ed3b6f-8748-4774-9a9e-ea06242039f1-cache" (OuterVolumeSpecName: "cache") pod "34ed3b6f-8748-4774-9a9e-ea06242039f1" (UID: "34ed3b6f-8748-4774-9a9e-ea06242039f1"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.476419 4810 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/34ed3b6f-8748-4774-9a9e-ea06242039f1-cache\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.476437 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4d62155-9354-4007-b2c6-6074e9fcd8eb-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.476450 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4d62155-9354-4007-b2c6-6074e9fcd8eb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.476463 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj4kz\" (UniqueName: \"kubernetes.io/projected/b4d62155-9354-4007-b2c6-6074e9fcd8eb-kube-api-access-wj4kz\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.476864 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34ed3b6f-8748-4774-9a9e-ea06242039f1-lock" (OuterVolumeSpecName: "lock") pod "34ed3b6f-8748-4774-9a9e-ea06242039f1" (UID: 
"34ed3b6f-8748-4774-9a9e-ea06242039f1"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.479238 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-kube-api-access-89qwj" (OuterVolumeSpecName: "kube-api-access-89qwj") pod "34ed3b6f-8748-4774-9a9e-ea06242039f1" (UID: "34ed3b6f-8748-4774-9a9e-ea06242039f1"). InnerVolumeSpecName "kube-api-access-89qwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.479331 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "34ed3b6f-8748-4774-9a9e-ea06242039f1" (UID: "34ed3b6f-8748-4774-9a9e-ea06242039f1"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.479856 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "swift") pod "34ed3b6f-8748-4774-9a9e-ea06242039f1" (UID: "34ed3b6f-8748-4774-9a9e-ea06242039f1"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.578186 4810 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/34ed3b6f-8748-4774-9a9e-ea06242039f1-lock\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.578245 4810 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.578258 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89qwj\" (UniqueName: \"kubernetes.io/projected/34ed3b6f-8748-4774-9a9e-ea06242039f1-kube-api-access-89qwj\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.578290 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.609264 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.679607 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.743849 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/swift-storage-0" event={"ID":"34ed3b6f-8748-4774-9a9e-ea06242039f1","Type":"ContainerDied","Data":"dfc74341c1d3561631866f7b256ca5c960af6bd0edcd6024ffe692051396188c"} Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.743913 4810 scope.go:117] "RemoveContainer" 
containerID="dddbabbe7583f51dc235c0b6dbc5acefc99aa8c863f1a15252f93a8d490de021" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.743971 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/swift-storage-0" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.745540 4810 generic.go:334] "Generic (PLEG): container finished" podID="b4d62155-9354-4007-b2c6-6074e9fcd8eb" containerID="340e1972035f60c0c9dfd9713cc715c2ae6ebb2e9270feef25fd2fffd936f63b" exitCode=0 Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.745589 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ztm" event={"ID":"b4d62155-9354-4007-b2c6-6074e9fcd8eb","Type":"ContainerDied","Data":"340e1972035f60c0c9dfd9713cc715c2ae6ebb2e9270feef25fd2fffd936f63b"} Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.745623 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f8ztm" event={"ID":"b4d62155-9354-4007-b2c6-6074e9fcd8eb","Type":"ContainerDied","Data":"3e7baaa427739c6e7d5fe3be372f594e638d0ca7f9d753cb489fe5807517434e"} Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.745659 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f8ztm" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.763026 4810 scope.go:117] "RemoveContainer" containerID="18fcca460c247694607a418b23e8da486e95700ebfe58e42b00d65642ae2842c" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.788982 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.791829 4810 scope.go:117] "RemoveContainer" containerID="7f896df056151c14af99f33063e08abd1786888052e5b2b632b2382d225d6947" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.794709 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/swift-storage-0"] Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.799576 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f8ztm"] Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.804719 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f8ztm"] Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.810143 4810 scope.go:117] "RemoveContainer" containerID="3fbe827f95c9c90cc45dfaec18ddb9fff51b8575b084ac96db37e5534d33d63a" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.843569 4810 scope.go:117] "RemoveContainer" containerID="b39aef561d413f22c457415ceb57fbf35b0bcad11b2330457457877b0e6c791d" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.889365 4810 scope.go:117] "RemoveContainer" containerID="750abf6afe4388e15620a8f70fbb4248201d556fa702c9f9bb55609803b7326e" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.906396 4810 scope.go:117] "RemoveContainer" containerID="f56624402b6b95d8c68e74773f4feb114075ba56bd6ea14d0d72bf3f448d72ed" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.919281 4810 scope.go:117] "RemoveContainer" containerID="3972a89d8789094796d7aa3abfc1784a848e161a5b44f6c048be3f2b385d74d8" Jan 
10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.935482 4810 scope.go:117] "RemoveContainer" containerID="b39fe8c1ca3aa2848f69c6bfd6a396644e1183d5068ed08735ded4fa1302b0c2" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.948985 4810 scope.go:117] "RemoveContainer" containerID="5cd69498c4c17d2605d1ff79c0515262ed25719ef11d6adf732e054dcad05811" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.963363 4810 scope.go:117] "RemoveContainer" containerID="927ceb99b11e6e0043e156d46b8142a6812052c38142452bde3b493015994c1c" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.982796 4810 scope.go:117] "RemoveContainer" containerID="a7240cfdd840760a1f4e147eb5492d6c2497f0d86a8a688b8fde701f4f4160c4" Jan 10 07:12:21 crc kubenswrapper[4810]: I0110 07:12:21.997781 4810 scope.go:117] "RemoveContainer" containerID="2de7d474f5c13f3bcbb421932d104b79c33e79c827e2408b4dca307603dde94b" Jan 10 07:12:22 crc kubenswrapper[4810]: I0110 07:12:22.015826 4810 scope.go:117] "RemoveContainer" containerID="19105d30aace1aeef1d1cde32e9a6d51e9f2bcc485a4062c236b1e6dbf929150" Jan 10 07:12:22 crc kubenswrapper[4810]: I0110 07:12:22.037841 4810 scope.go:117] "RemoveContainer" containerID="688af7fdfaca34e9f4e7c02113b1d9471b23a61c9116eeebe3a1fff61619ed9d" Jan 10 07:12:22 crc kubenswrapper[4810]: I0110 07:12:22.058000 4810 scope.go:117] "RemoveContainer" containerID="340e1972035f60c0c9dfd9713cc715c2ae6ebb2e9270feef25fd2fffd936f63b" Jan 10 07:12:22 crc kubenswrapper[4810]: I0110 07:12:22.075875 4810 scope.go:117] "RemoveContainer" containerID="40e46da4b20b744506dbaaa38b004bf311c6c2f86006260b61ba134dd79ac3e6" Jan 10 07:12:22 crc kubenswrapper[4810]: I0110 07:12:22.098638 4810 scope.go:117] "RemoveContainer" containerID="5e8cd69942e4bf30cf79fc5edc23e22d0cd2ee295dd39ba678c0c55557550cc1" Jan 10 07:12:22 crc kubenswrapper[4810]: I0110 07:12:22.116663 4810 scope.go:117] "RemoveContainer" containerID="340e1972035f60c0c9dfd9713cc715c2ae6ebb2e9270feef25fd2fffd936f63b" Jan 10 07:12:22 crc 
kubenswrapper[4810]: E0110 07:12:22.117189 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"340e1972035f60c0c9dfd9713cc715c2ae6ebb2e9270feef25fd2fffd936f63b\": container with ID starting with 340e1972035f60c0c9dfd9713cc715c2ae6ebb2e9270feef25fd2fffd936f63b not found: ID does not exist" containerID="340e1972035f60c0c9dfd9713cc715c2ae6ebb2e9270feef25fd2fffd936f63b" Jan 10 07:12:22 crc kubenswrapper[4810]: I0110 07:12:22.117255 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"340e1972035f60c0c9dfd9713cc715c2ae6ebb2e9270feef25fd2fffd936f63b"} err="failed to get container status \"340e1972035f60c0c9dfd9713cc715c2ae6ebb2e9270feef25fd2fffd936f63b\": rpc error: code = NotFound desc = could not find container \"340e1972035f60c0c9dfd9713cc715c2ae6ebb2e9270feef25fd2fffd936f63b\": container with ID starting with 340e1972035f60c0c9dfd9713cc715c2ae6ebb2e9270feef25fd2fffd936f63b not found: ID does not exist" Jan 10 07:12:22 crc kubenswrapper[4810]: I0110 07:12:22.117289 4810 scope.go:117] "RemoveContainer" containerID="40e46da4b20b744506dbaaa38b004bf311c6c2f86006260b61ba134dd79ac3e6" Jan 10 07:12:22 crc kubenswrapper[4810]: E0110 07:12:22.117679 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40e46da4b20b744506dbaaa38b004bf311c6c2f86006260b61ba134dd79ac3e6\": container with ID starting with 40e46da4b20b744506dbaaa38b004bf311c6c2f86006260b61ba134dd79ac3e6 not found: ID does not exist" containerID="40e46da4b20b744506dbaaa38b004bf311c6c2f86006260b61ba134dd79ac3e6" Jan 10 07:12:22 crc kubenswrapper[4810]: I0110 07:12:22.117705 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40e46da4b20b744506dbaaa38b004bf311c6c2f86006260b61ba134dd79ac3e6"} err="failed to get container status 
\"40e46da4b20b744506dbaaa38b004bf311c6c2f86006260b61ba134dd79ac3e6\": rpc error: code = NotFound desc = could not find container \"40e46da4b20b744506dbaaa38b004bf311c6c2f86006260b61ba134dd79ac3e6\": container with ID starting with 40e46da4b20b744506dbaaa38b004bf311c6c2f86006260b61ba134dd79ac3e6 not found: ID does not exist" Jan 10 07:12:22 crc kubenswrapper[4810]: I0110 07:12:22.117719 4810 scope.go:117] "RemoveContainer" containerID="5e8cd69942e4bf30cf79fc5edc23e22d0cd2ee295dd39ba678c0c55557550cc1" Jan 10 07:12:22 crc kubenswrapper[4810]: E0110 07:12:22.118127 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e8cd69942e4bf30cf79fc5edc23e22d0cd2ee295dd39ba678c0c55557550cc1\": container with ID starting with 5e8cd69942e4bf30cf79fc5edc23e22d0cd2ee295dd39ba678c0c55557550cc1 not found: ID does not exist" containerID="5e8cd69942e4bf30cf79fc5edc23e22d0cd2ee295dd39ba678c0c55557550cc1" Jan 10 07:12:22 crc kubenswrapper[4810]: I0110 07:12:22.118169 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e8cd69942e4bf30cf79fc5edc23e22d0cd2ee295dd39ba678c0c55557550cc1"} err="failed to get container status \"5e8cd69942e4bf30cf79fc5edc23e22d0cd2ee295dd39ba678c0c55557550cc1\": rpc error: code = NotFound desc = could not find container \"5e8cd69942e4bf30cf79fc5edc23e22d0cd2ee295dd39ba678c0c55557550cc1\": container with ID starting with 5e8cd69942e4bf30cf79fc5edc23e22d0cd2ee295dd39ba678c0c55557550cc1 not found: ID does not exist" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.513569 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lj95q"] Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514070 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-server" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514081 
4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-server" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514091 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-auditor" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514097 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-auditor" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514107 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-updater" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514114 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-updater" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514122 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-updater" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514129 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-updater" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514141 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="swift-recon-cron" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514147 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="swift-recon-cron" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514158 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-replicator" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514164 4810 
state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-replicator" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514174 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-replicator" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514180 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-replicator" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514186 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-reaper" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514208 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-reaper" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514218 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-server" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514224 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-server" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514231 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d62155-9354-4007-b2c6-6074e9fcd8eb" containerName="extract-content" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514237 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d62155-9354-4007-b2c6-6074e9fcd8eb" containerName="extract-content" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514246 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-auditor" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514252 4810 
state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-auditor" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514262 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="rsync" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514268 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="rsync" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514275 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d62155-9354-4007-b2c6-6074e9fcd8eb" containerName="registry-server" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514280 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d62155-9354-4007-b2c6-6074e9fcd8eb" containerName="registry-server" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514290 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-replicator" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514295 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-replicator" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514303 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-expirer" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514311 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-expirer" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514320 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-auditor" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514326 4810 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-auditor" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514335 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-server" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514341 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-server" Jan 10 07:12:23 crc kubenswrapper[4810]: E0110 07:12:23.514350 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4d62155-9354-4007-b2c6-6074e9fcd8eb" containerName="extract-utilities" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514355 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d62155-9354-4007-b2c6-6074e9fcd8eb" containerName="extract-utilities" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514461 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="rsync" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514473 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-auditor" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514483 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-server" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514493 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4d62155-9354-4007-b2c6-6074e9fcd8eb" containerName="registry-server" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514500 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-updater" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514508 4810 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-auditor" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514514 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-auditor" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514521 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="container-replicator" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514527 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-reaper" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514533 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-replicator" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514539 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-server" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514548 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="swift-recon-cron" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514557 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-expirer" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514565 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-server" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.514571 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="object-updater" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 
07:12:23.514579 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" containerName="account-replicator" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.515453 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.525593 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj95q"] Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.701109 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ed3b6f-8748-4774-9a9e-ea06242039f1" path="/var/lib/kubelet/pods/34ed3b6f-8748-4774-9a9e-ea06242039f1/volumes" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.703552 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4d62155-9354-4007-b2c6-6074e9fcd8eb" path="/var/lib/kubelet/pods/b4d62155-9354-4007-b2c6-6074e9fcd8eb/volumes" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.714075 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/937bd81b-b395-4267-a2b6-b82c5a90f3d7-utilities\") pod \"redhat-marketplace-lj95q\" (UID: \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\") " pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.714348 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfqc2\" (UniqueName: \"kubernetes.io/projected/937bd81b-b395-4267-a2b6-b82c5a90f3d7-kube-api-access-gfqc2\") pod \"redhat-marketplace-lj95q\" (UID: \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\") " pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.714447 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/937bd81b-b395-4267-a2b6-b82c5a90f3d7-catalog-content\") pod \"redhat-marketplace-lj95q\" (UID: \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\") " pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.816100 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/937bd81b-b395-4267-a2b6-b82c5a90f3d7-utilities\") pod \"redhat-marketplace-lj95q\" (UID: \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\") " pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.816174 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfqc2\" (UniqueName: \"kubernetes.io/projected/937bd81b-b395-4267-a2b6-b82c5a90f3d7-kube-api-access-gfqc2\") pod \"redhat-marketplace-lj95q\" (UID: \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\") " pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.816260 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/937bd81b-b395-4267-a2b6-b82c5a90f3d7-catalog-content\") pod \"redhat-marketplace-lj95q\" (UID: \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\") " pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.817328 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/937bd81b-b395-4267-a2b6-b82c5a90f3d7-utilities\") pod \"redhat-marketplace-lj95q\" (UID: \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\") " pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.817490 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/937bd81b-b395-4267-a2b6-b82c5a90f3d7-catalog-content\") pod \"redhat-marketplace-lj95q\" (UID: \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\") " pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:23 crc kubenswrapper[4810]: I0110 07:12:23.838213 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfqc2\" (UniqueName: \"kubernetes.io/projected/937bd81b-b395-4267-a2b6-b82c5a90f3d7-kube-api-access-gfqc2\") pod \"redhat-marketplace-lj95q\" (UID: \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\") " pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:24 crc kubenswrapper[4810]: I0110 07:12:24.133360 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:24 crc kubenswrapper[4810]: I0110 07:12:24.535424 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj95q"] Jan 10 07:12:24 crc kubenswrapper[4810]: I0110 07:12:24.770548 4810 generic.go:334] "Generic (PLEG): container finished" podID="937bd81b-b395-4267-a2b6-b82c5a90f3d7" containerID="bfacc0f29beb4ea98306e445d6cb3ccf037298faa2f95e19f24e8e432ddd9196" exitCode=0 Jan 10 07:12:24 crc kubenswrapper[4810]: I0110 07:12:24.770604 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj95q" event={"ID":"937bd81b-b395-4267-a2b6-b82c5a90f3d7","Type":"ContainerDied","Data":"bfacc0f29beb4ea98306e445d6cb3ccf037298faa2f95e19f24e8e432ddd9196"} Jan 10 07:12:24 crc kubenswrapper[4810]: I0110 07:12:24.770652 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj95q" event={"ID":"937bd81b-b395-4267-a2b6-b82c5a90f3d7","Type":"ContainerStarted","Data":"a6b0d571c087f11b946199f5ece1d74337818484bef1ec21c61da660fab83da7"} Jan 10 07:12:25 crc kubenswrapper[4810]: I0110 07:12:25.780725 4810 generic.go:334] "Generic (PLEG): container 
finished" podID="937bd81b-b395-4267-a2b6-b82c5a90f3d7" containerID="f32ad3b6685e4af8199eb7f218cd2e9fecdfe079442323a90702978c3a3699c3" exitCode=0 Jan 10 07:12:25 crc kubenswrapper[4810]: I0110 07:12:25.780862 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj95q" event={"ID":"937bd81b-b395-4267-a2b6-b82c5a90f3d7","Type":"ContainerDied","Data":"f32ad3b6685e4af8199eb7f218cd2e9fecdfe079442323a90702978c3a3699c3"} Jan 10 07:12:26 crc kubenswrapper[4810]: I0110 07:12:26.789996 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj95q" event={"ID":"937bd81b-b395-4267-a2b6-b82c5a90f3d7","Type":"ContainerStarted","Data":"b7448c5b5d1b0a3d3b0fe924f7c76171931ba85e061fe3903d549f0162e57615"} Jan 10 07:12:26 crc kubenswrapper[4810]: I0110 07:12:26.813330 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lj95q" podStartSLOduration=2.248605019 podStartE2EDuration="3.813309829s" podCreationTimestamp="2026-01-10 07:12:23 +0000 UTC" firstStartedPulling="2026-01-10 07:12:24.772145771 +0000 UTC m=+1573.387638654" lastFinishedPulling="2026-01-10 07:12:26.336850581 +0000 UTC m=+1574.952343464" observedRunningTime="2026-01-10 07:12:26.807865059 +0000 UTC m=+1575.423357952" watchObservedRunningTime="2026-01-10 07:12:26.813309829 +0000 UTC m=+1575.428802722" Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.044973 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-l684q"] Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.055311 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-db-sync-l684q"] Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.131406 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/barbican2aa3-account-delete-kwwwp"] Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.132536 4810 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.142547 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw"] Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.142938 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" podUID="9ae7f276-37e7-4ef5-831a-4c8b911bfc84" containerName="barbican-worker-log" containerID="cri-o://f5149bac691ab2cf15c4be9553e38e23aa809552c6038100850b95503d844ceb" gracePeriod=30 Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.142971 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" podUID="9ae7f276-37e7-4ef5-831a-4c8b911bfc84" containerName="barbican-worker" containerID="cri-o://f5185552ac3bc1bd54632d70763e412461cc956133d4f5ecf1c39e38b6ad3981" gracePeriod=30 Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.165214 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx"] Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.165681 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" podUID="c53844ae-14bb-4c0b-808e-3a198ad77d3c" containerName="barbican-keystone-listener-log" containerID="cri-o://8e0033c15227b4d7045de1f476c6557d8e254b22361b03603aae6d056dabd636" gracePeriod=30 Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.165798 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" podUID="c53844ae-14bb-4c0b-808e-3a198ad77d3c" containerName="barbican-keystone-listener" 
containerID="cri-o://6d9fab0247e123215fc3bef4edf0bc2ec4b961752acebed1090a11f61e1a602f" gracePeriod=30 Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.173624 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-api-867d7c779b-gb8r6"] Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.173876 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" podUID="70f8d132-32ca-40ae-a116-0684c284eb4e" containerName="barbican-api-log" containerID="cri-o://b1c2f09df1781147b6b8ccf4a85b36113f6cbad2bff102b1f466ae4e62836d2b" gracePeriod=30 Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.174020 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" podUID="70f8d132-32ca-40ae-a116-0684c284eb4e" containerName="barbican-api" containerID="cri-o://341a4f04aff29fad2fb9ca7620978f8049217d34ad6b6a718b0d5e6470e17d91" gracePeriod=30 Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.182802 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/barbican2aa3-account-delete-kwwwp"] Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.285099 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7130a15-458b-41ff-9d3b-931ac0890ed9-operator-scripts\") pod \"barbican2aa3-account-delete-kwwwp\" (UID: \"b7130a15-458b-41ff-9d3b-931ac0890ed9\") " pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.285148 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb9rp\" (UniqueName: \"kubernetes.io/projected/b7130a15-458b-41ff-9d3b-931ac0890ed9-kube-api-access-kb9rp\") pod \"barbican2aa3-account-delete-kwwwp\" (UID: 
\"b7130a15-458b-41ff-9d3b-931ac0890ed9\") " pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.386771 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7130a15-458b-41ff-9d3b-931ac0890ed9-operator-scripts\") pod \"barbican2aa3-account-delete-kwwwp\" (UID: \"b7130a15-458b-41ff-9d3b-931ac0890ed9\") " pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.386822 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb9rp\" (UniqueName: \"kubernetes.io/projected/b7130a15-458b-41ff-9d3b-931ac0890ed9-kube-api-access-kb9rp\") pod \"barbican2aa3-account-delete-kwwwp\" (UID: \"b7130a15-458b-41ff-9d3b-931ac0890ed9\") " pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.387967 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7130a15-458b-41ff-9d3b-931ac0890ed9-operator-scripts\") pod \"barbican2aa3-account-delete-kwwwp\" (UID: \"b7130a15-458b-41ff-9d3b-931ac0890ed9\") " pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.405088 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb9rp\" (UniqueName: \"kubernetes.io/projected/b7130a15-458b-41ff-9d3b-931ac0890ed9-kube-api-access-kb9rp\") pod \"barbican2aa3-account-delete-kwwwp\" (UID: \"b7130a15-458b-41ff-9d3b-931ac0890ed9\") " pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.454027 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.808247 4810 generic.go:334] "Generic (PLEG): container finished" podID="c53844ae-14bb-4c0b-808e-3a198ad77d3c" containerID="8e0033c15227b4d7045de1f476c6557d8e254b22361b03603aae6d056dabd636" exitCode=143 Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.808313 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" event={"ID":"c53844ae-14bb-4c0b-808e-3a198ad77d3c","Type":"ContainerDied","Data":"8e0033c15227b4d7045de1f476c6557d8e254b22361b03603aae6d056dabd636"} Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.811387 4810 generic.go:334] "Generic (PLEG): container finished" podID="9ae7f276-37e7-4ef5-831a-4c8b911bfc84" containerID="f5149bac691ab2cf15c4be9553e38e23aa809552c6038100850b95503d844ceb" exitCode=143 Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.811477 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" event={"ID":"9ae7f276-37e7-4ef5-831a-4c8b911bfc84","Type":"ContainerDied","Data":"f5149bac691ab2cf15c4be9553e38e23aa809552c6038100850b95503d844ceb"} Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.814988 4810 generic.go:334] "Generic (PLEG): container finished" podID="70f8d132-32ca-40ae-a116-0684c284eb4e" containerID="b1c2f09df1781147b6b8ccf4a85b36113f6cbad2bff102b1f466ae4e62836d2b" exitCode=143 Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.815022 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" event={"ID":"70f8d132-32ca-40ae-a116-0684c284eb4e","Type":"ContainerDied","Data":"b1c2f09df1781147b6b8ccf4a85b36113f6cbad2bff102b1f466ae4e62836d2b"} Jan 10 07:12:28 crc kubenswrapper[4810]: I0110 07:12:28.889635 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["swift-kuttl-tests/barbican2aa3-account-delete-kwwwp"] Jan 10 07:12:28 crc kubenswrapper[4810]: W0110 07:12:28.896354 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7130a15_458b_41ff_9d3b_931ac0890ed9.slice/crio-fc7846f26c3abfe742e05f4276f9a2345f8bba4af9888c559b78270c35e9ed1d WatchSource:0}: Error finding container fc7846f26c3abfe742e05f4276f9a2345f8bba4af9888c559b78270c35e9ed1d: Status 404 returned error can't find the container with id fc7846f26c3abfe742e05f4276f9a2345f8bba4af9888c559b78270c35e9ed1d Jan 10 07:12:29 crc kubenswrapper[4810]: I0110 07:12:29.703770 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd9db040-7192-4fdb-951f-07e206f32728" path="/var/lib/kubelet/pods/dd9db040-7192-4fdb-951f-07e206f32728/volumes" Jan 10 07:12:29 crc kubenswrapper[4810]: I0110 07:12:29.824235 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" event={"ID":"b7130a15-458b-41ff-9d3b-931ac0890ed9","Type":"ContainerStarted","Data":"fc7846f26c3abfe742e05f4276f9a2345f8bba4af9888c559b78270c35e9ed1d"} Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.768098 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.834969 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" event={"ID":"b7130a15-458b-41ff-9d3b-931ac0890ed9","Type":"ContainerStarted","Data":"ef91f5cd09e625b653f15d0ad1bd620293a058cc6b1f41e3c7658b45bb76afc8"} Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.837543 4810 generic.go:334] "Generic (PLEG): container finished" podID="c53844ae-14bb-4c0b-808e-3a198ad77d3c" containerID="6d9fab0247e123215fc3bef4edf0bc2ec4b961752acebed1090a11f61e1a602f" exitCode=0 Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.837580 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" event={"ID":"c53844ae-14bb-4c0b-808e-3a198ad77d3c","Type":"ContainerDied","Data":"6d9fab0247e123215fc3bef4edf0bc2ec4b961752acebed1090a11f61e1a602f"} Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.837623 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" event={"ID":"c53844ae-14bb-4c0b-808e-3a198ad77d3c","Type":"ContainerDied","Data":"a2d5137c72221fc844222ad988a1d7f4207fb4da547f25fa566cb86b4d5040fc"} Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.837639 4810 scope.go:117] "RemoveContainer" containerID="6d9fab0247e123215fc3bef4edf0bc2ec4b961752acebed1090a11f61e1a602f" Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.837564 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx" Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.841964 4810 generic.go:334] "Generic (PLEG): container finished" podID="9ae7f276-37e7-4ef5-831a-4c8b911bfc84" containerID="f5185552ac3bc1bd54632d70763e412461cc956133d4f5ecf1c39e38b6ad3981" exitCode=0 Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.842036 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" event={"ID":"9ae7f276-37e7-4ef5-831a-4c8b911bfc84","Type":"ContainerDied","Data":"f5185552ac3bc1bd54632d70763e412461cc956133d4f5ecf1c39e38b6ad3981"} Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.863572 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" podStartSLOduration=2.863545669 podStartE2EDuration="2.863545669s" podCreationTimestamp="2026-01-10 07:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:12:30.857422882 +0000 UTC m=+1579.472915775" watchObservedRunningTime="2026-01-10 07:12:30.863545669 +0000 UTC m=+1579.479038592" Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.908389 4810 scope.go:117] "RemoveContainer" containerID="8e0033c15227b4d7045de1f476c6557d8e254b22361b03603aae6d056dabd636" Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.926022 4810 scope.go:117] "RemoveContainer" containerID="6d9fab0247e123215fc3bef4edf0bc2ec4b961752acebed1090a11f61e1a602f" Jan 10 07:12:30 crc kubenswrapper[4810]: E0110 07:12:30.926482 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d9fab0247e123215fc3bef4edf0bc2ec4b961752acebed1090a11f61e1a602f\": container with ID starting with 6d9fab0247e123215fc3bef4edf0bc2ec4b961752acebed1090a11f61e1a602f not found: ID does not exist" 
containerID="6d9fab0247e123215fc3bef4edf0bc2ec4b961752acebed1090a11f61e1a602f" Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.926518 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d9fab0247e123215fc3bef4edf0bc2ec4b961752acebed1090a11f61e1a602f"} err="failed to get container status \"6d9fab0247e123215fc3bef4edf0bc2ec4b961752acebed1090a11f61e1a602f\": rpc error: code = NotFound desc = could not find container \"6d9fab0247e123215fc3bef4edf0bc2ec4b961752acebed1090a11f61e1a602f\": container with ID starting with 6d9fab0247e123215fc3bef4edf0bc2ec4b961752acebed1090a11f61e1a602f not found: ID does not exist" Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.926544 4810 scope.go:117] "RemoveContainer" containerID="8e0033c15227b4d7045de1f476c6557d8e254b22361b03603aae6d056dabd636" Jan 10 07:12:30 crc kubenswrapper[4810]: E0110 07:12:30.926850 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e0033c15227b4d7045de1f476c6557d8e254b22361b03603aae6d056dabd636\": container with ID starting with 8e0033c15227b4d7045de1f476c6557d8e254b22361b03603aae6d056dabd636 not found: ID does not exist" containerID="8e0033c15227b4d7045de1f476c6557d8e254b22361b03603aae6d056dabd636" Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.926888 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e0033c15227b4d7045de1f476c6557d8e254b22361b03603aae6d056dabd636"} err="failed to get container status \"8e0033c15227b4d7045de1f476c6557d8e254b22361b03603aae6d056dabd636\": rpc error: code = NotFound desc = could not find container \"8e0033c15227b4d7045de1f476c6557d8e254b22361b03603aae6d056dabd636\": container with ID starting with 8e0033c15227b4d7045de1f476c6557d8e254b22361b03603aae6d056dabd636 not found: ID does not exist" Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.929425 4810 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c53844ae-14bb-4c0b-808e-3a198ad77d3c-config-data-custom\") pod \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.929466 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfjh7\" (UniqueName: \"kubernetes.io/projected/c53844ae-14bb-4c0b-808e-3a198ad77d3c-kube-api-access-zfjh7\") pod \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.929533 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c53844ae-14bb-4c0b-808e-3a198ad77d3c-logs\") pod \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.929566 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53844ae-14bb-4c0b-808e-3a198ad77d3c-config-data\") pod \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\" (UID: \"c53844ae-14bb-4c0b-808e-3a198ad77d3c\") " Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.930387 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c53844ae-14bb-4c0b-808e-3a198ad77d3c-logs" (OuterVolumeSpecName: "logs") pod "c53844ae-14bb-4c0b-808e-3a198ad77d3c" (UID: "c53844ae-14bb-4c0b-808e-3a198ad77d3c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.935071 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c53844ae-14bb-4c0b-808e-3a198ad77d3c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c53844ae-14bb-4c0b-808e-3a198ad77d3c" (UID: "c53844ae-14bb-4c0b-808e-3a198ad77d3c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.938181 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c53844ae-14bb-4c0b-808e-3a198ad77d3c-kube-api-access-zfjh7" (OuterVolumeSpecName: "kube-api-access-zfjh7") pod "c53844ae-14bb-4c0b-808e-3a198ad77d3c" (UID: "c53844ae-14bb-4c0b-808e-3a198ad77d3c"). InnerVolumeSpecName "kube-api-access-zfjh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:30 crc kubenswrapper[4810]: I0110 07:12:30.975792 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c53844ae-14bb-4c0b-808e-3a198ad77d3c-config-data" (OuterVolumeSpecName: "config-data") pod "c53844ae-14bb-4c0b-808e-3a198ad77d3c" (UID: "c53844ae-14bb-4c0b-808e-3a198ad77d3c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.033184 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c53844ae-14bb-4c0b-808e-3a198ad77d3c-logs\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.033521 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53844ae-14bb-4c0b-808e-3a198ad77d3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.033643 4810 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c53844ae-14bb-4c0b-808e-3a198ad77d3c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.033796 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfjh7\" (UniqueName: \"kubernetes.io/projected/c53844ae-14bb-4c0b-808e-3a198ad77d3c-kube-api-access-zfjh7\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.167457 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx"] Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.173046 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-keystone-listener-5579f95b8d-5grzx"] Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.478737 4810 prober.go:107] "Probe failed" probeType="Readiness" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" podUID="70f8d132-32ca-40ae-a116-0684c284eb4e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.95:9311/healthcheck\": dial tcp 10.217.0.95:9311: connect: connection refused" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.478758 4810 prober.go:107] "Probe failed" probeType="Readiness" 
pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" podUID="70f8d132-32ca-40ae-a116-0684c284eb4e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.95:9311/healthcheck\": dial tcp 10.217.0.95:9311: connect: connection refused" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.479294 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.642531 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqkkr\" (UniqueName: \"kubernetes.io/projected/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-kube-api-access-mqkkr\") pod \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.642641 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-config-data-custom\") pod \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.642726 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-logs\") pod \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.642756 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-config-data\") pod \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\" (UID: \"9ae7f276-37e7-4ef5-831a-4c8b911bfc84\") " Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.643206 4810 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-logs" (OuterVolumeSpecName: "logs") pod "9ae7f276-37e7-4ef5-831a-4c8b911bfc84" (UID: "9ae7f276-37e7-4ef5-831a-4c8b911bfc84"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.649480 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9ae7f276-37e7-4ef5-831a-4c8b911bfc84" (UID: "9ae7f276-37e7-4ef5-831a-4c8b911bfc84"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.649572 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-kube-api-access-mqkkr" (OuterVolumeSpecName: "kube-api-access-mqkkr") pod "9ae7f276-37e7-4ef5-831a-4c8b911bfc84" (UID: "9ae7f276-37e7-4ef5-831a-4c8b911bfc84"). InnerVolumeSpecName "kube-api-access-mqkkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.683320 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-config-data" (OuterVolumeSpecName: "config-data") pod "9ae7f276-37e7-4ef5-831a-4c8b911bfc84" (UID: "9ae7f276-37e7-4ef5-831a-4c8b911bfc84"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.702600 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c53844ae-14bb-4c0b-808e-3a198ad77d3c" path="/var/lib/kubelet/pods/c53844ae-14bb-4c0b-808e-3a198ad77d3c/volumes" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.744134 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqkkr\" (UniqueName: \"kubernetes.io/projected/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-kube-api-access-mqkkr\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.744164 4810 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.744173 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-logs\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.744183 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae7f276-37e7-4ef5-831a-4c8b911bfc84-config-data\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.851822 4810 generic.go:334] "Generic (PLEG): container finished" podID="70f8d132-32ca-40ae-a116-0684c284eb4e" containerID="341a4f04aff29fad2fb9ca7620978f8049217d34ad6b6a718b0d5e6470e17d91" exitCode=0 Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.851944 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" event={"ID":"70f8d132-32ca-40ae-a116-0684c284eb4e","Type":"ContainerDied","Data":"341a4f04aff29fad2fb9ca7620978f8049217d34ad6b6a718b0d5e6470e17d91"} Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 
07:12:31.855639 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" event={"ID":"9ae7f276-37e7-4ef5-831a-4c8b911bfc84","Type":"ContainerDied","Data":"9d8318dc82d6fb25f7d4747e0bafe08afdc9d78f64037163e167ac894216356c"} Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.855688 4810 scope.go:117] "RemoveContainer" containerID="f5185552ac3bc1bd54632d70763e412461cc956133d4f5ecf1c39e38b6ad3981" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.855731 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.878764 4810 scope.go:117] "RemoveContainer" containerID="f5149bac691ab2cf15c4be9553e38e23aa809552c6038100850b95503d844ceb" Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.882900 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw"] Jan 10 07:12:31 crc kubenswrapper[4810]: I0110 07:12:31.890393 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-worker-799d94b4df-cbsxw"] Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.421020 4810 scope.go:117] "RemoveContainer" containerID="f251f934da984a83daa95f1d7ebf48df29f34a0091743c4bd141c24b21008a3e" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.445304 4810 scope.go:117] "RemoveContainer" containerID="afe7d4c70a56c2bfed15eeca41c87d18bf47895f7ef6e1303daa21c5e17afc5b" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.488871 4810 scope.go:117] "RemoveContainer" containerID="5a0c21d6bf258ac463f0ecd86f456a01ed37a6415e99ee7aab1e88d2796c087f" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.555329 4810 scope.go:117] "RemoveContainer" containerID="4213906ef0291c4466dfcbb3e3445a71e0f29c4f871c44b75a4c5729c59c19bc" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.569635 4810 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.579439 4810 scope.go:117] "RemoveContainer" containerID="341a4f04aff29fad2fb9ca7620978f8049217d34ad6b6a718b0d5e6470e17d91" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.579700 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f8d132-32ca-40ae-a116-0684c284eb4e-logs\") pod \"70f8d132-32ca-40ae-a116-0684c284eb4e\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.579798 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f8d132-32ca-40ae-a116-0684c284eb4e-config-data\") pod \"70f8d132-32ca-40ae-a116-0684c284eb4e\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.579891 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70f8d132-32ca-40ae-a116-0684c284eb4e-config-data-custom\") pod \"70f8d132-32ca-40ae-a116-0684c284eb4e\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.579925 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr994\" (UniqueName: \"kubernetes.io/projected/70f8d132-32ca-40ae-a116-0684c284eb4e-kube-api-access-zr994\") pod \"70f8d132-32ca-40ae-a116-0684c284eb4e\" (UID: \"70f8d132-32ca-40ae-a116-0684c284eb4e\") " Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.580312 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70f8d132-32ca-40ae-a116-0684c284eb4e-logs" (OuterVolumeSpecName: "logs") pod "70f8d132-32ca-40ae-a116-0684c284eb4e" (UID: 
"70f8d132-32ca-40ae-a116-0684c284eb4e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.584513 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f8d132-32ca-40ae-a116-0684c284eb4e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "70f8d132-32ca-40ae-a116-0684c284eb4e" (UID: "70f8d132-32ca-40ae-a116-0684c284eb4e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.586341 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f8d132-32ca-40ae-a116-0684c284eb4e-kube-api-access-zr994" (OuterVolumeSpecName: "kube-api-access-zr994") pod "70f8d132-32ca-40ae-a116-0684c284eb4e" (UID: "70f8d132-32ca-40ae-a116-0684c284eb4e"). InnerVolumeSpecName "kube-api-access-zr994". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.618382 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f8d132-32ca-40ae-a116-0684c284eb4e-config-data" (OuterVolumeSpecName: "config-data") pod "70f8d132-32ca-40ae-a116-0684c284eb4e" (UID: "70f8d132-32ca-40ae-a116-0684c284eb4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.645329 4810 scope.go:117] "RemoveContainer" containerID="c2be034fc72f171ac8b87be0c14eac435525efdd4eeb007c49e180965967ab71" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.658408 4810 scope.go:117] "RemoveContainer" containerID="b1c2f09df1781147b6b8ccf4a85b36113f6cbad2bff102b1f466ae4e62836d2b" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.681756 4810 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/70f8d132-32ca-40ae-a116-0684c284eb4e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.681788 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr994\" (UniqueName: \"kubernetes.io/projected/70f8d132-32ca-40ae-a116-0684c284eb4e-kube-api-access-zr994\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.681799 4810 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f8d132-32ca-40ae-a116-0684c284eb4e-logs\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.681808 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f8d132-32ca-40ae-a116-0684c284eb4e-config-data\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.863900 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" event={"ID":"70f8d132-32ca-40ae-a116-0684c284eb4e","Type":"ContainerDied","Data":"05ad3520d7b263d7d7b730f6f785b8d41ff8e8206a3c2998a4d3ebb56da17822"} Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.865750 4810 generic.go:334] "Generic (PLEG): container finished" podID="b7130a15-458b-41ff-9d3b-931ac0890ed9" 
containerID="ef91f5cd09e625b653f15d0ad1bd620293a058cc6b1f41e3c7658b45bb76afc8" exitCode=0 Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.865816 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican-api-867d7c779b-gb8r6" Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.865864 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" event={"ID":"b7130a15-458b-41ff-9d3b-931ac0890ed9","Type":"ContainerDied","Data":"ef91f5cd09e625b653f15d0ad1bd620293a058cc6b1f41e3c7658b45bb76afc8"} Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.923020 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-api-867d7c779b-gb8r6"] Jan 10 07:12:32 crc kubenswrapper[4810]: I0110 07:12:32.930683 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-api-867d7c779b-gb8r6"] Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.473679 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-894vg"] Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.479569 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-mmrvb"] Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.499248 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-sync-894vg"] Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.512321 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-5466b78d84-wrtxv"] Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.512574 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" podUID="01b7ca68-697a-4b7a-8c56-73b4a6fc7283" containerName="keystone-api" containerID="cri-o://98290f3347a44234bcf9800c5d3cc6af1020f0d56a2a4922cc3110b02f8ca929" gracePeriod=30 Jan 10 
07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.520864 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-bootstrap-mmrvb"] Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.549240 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/keystonec4b1-account-delete-nf6x5"] Jan 10 07:12:33 crc kubenswrapper[4810]: E0110 07:12:33.549507 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53844ae-14bb-4c0b-808e-3a198ad77d3c" containerName="barbican-keystone-listener" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.549523 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53844ae-14bb-4c0b-808e-3a198ad77d3c" containerName="barbican-keystone-listener" Jan 10 07:12:33 crc kubenswrapper[4810]: E0110 07:12:33.549545 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c53844ae-14bb-4c0b-808e-3a198ad77d3c" containerName="barbican-keystone-listener-log" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.549552 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c53844ae-14bb-4c0b-808e-3a198ad77d3c" containerName="barbican-keystone-listener-log" Jan 10 07:12:33 crc kubenswrapper[4810]: E0110 07:12:33.549565 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f8d132-32ca-40ae-a116-0684c284eb4e" containerName="barbican-api" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.549572 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f8d132-32ca-40ae-a116-0684c284eb4e" containerName="barbican-api" Jan 10 07:12:33 crc kubenswrapper[4810]: E0110 07:12:33.549583 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae7f276-37e7-4ef5-831a-4c8b911bfc84" containerName="barbican-worker-log" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.549590 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae7f276-37e7-4ef5-831a-4c8b911bfc84" containerName="barbican-worker-log" Jan 10 07:12:33 crc 
kubenswrapper[4810]: E0110 07:12:33.549601 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae7f276-37e7-4ef5-831a-4c8b911bfc84" containerName="barbican-worker" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.549607 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae7f276-37e7-4ef5-831a-4c8b911bfc84" containerName="barbican-worker" Jan 10 07:12:33 crc kubenswrapper[4810]: E0110 07:12:33.549620 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f8d132-32ca-40ae-a116-0684c284eb4e" containerName="barbican-api-log" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.549627 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f8d132-32ca-40ae-a116-0684c284eb4e" containerName="barbican-api-log" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.549751 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f8d132-32ca-40ae-a116-0684c284eb4e" containerName="barbican-api-log" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.549762 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c53844ae-14bb-4c0b-808e-3a198ad77d3c" containerName="barbican-keystone-listener" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.549774 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae7f276-37e7-4ef5-831a-4c8b911bfc84" containerName="barbican-worker" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.549785 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f8d132-32ca-40ae-a116-0684c284eb4e" containerName="barbican-api" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.549794 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae7f276-37e7-4ef5-831a-4c8b911bfc84" containerName="barbican-worker-log" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.549802 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c53844ae-14bb-4c0b-808e-3a198ad77d3c" 
containerName="barbican-keystone-listener-log" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.550843 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.560883 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystonec4b1-account-delete-nf6x5"] Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.593559 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wgn8\" (UniqueName: \"kubernetes.io/projected/4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e-kube-api-access-8wgn8\") pod \"keystonec4b1-account-delete-nf6x5\" (UID: \"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e\") " pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.593675 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e-operator-scripts\") pod \"keystonec4b1-account-delete-nf6x5\" (UID: \"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e\") " pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.695038 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e-operator-scripts\") pod \"keystonec4b1-account-delete-nf6x5\" (UID: \"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e\") " pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.695149 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wgn8\" (UniqueName: \"kubernetes.io/projected/4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e-kube-api-access-8wgn8\") pod 
\"keystonec4b1-account-delete-nf6x5\" (UID: \"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e\") " pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.695927 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e-operator-scripts\") pod \"keystonec4b1-account-delete-nf6x5\" (UID: \"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e\") " pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.701661 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f325c4-37ce-4a2a-910e-817636cc1dc1" path="/var/lib/kubelet/pods/09f325c4-37ce-4a2a-910e-817636cc1dc1/volumes" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.702348 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f8d132-32ca-40ae-a116-0684c284eb4e" path="/var/lib/kubelet/pods/70f8d132-32ca-40ae-a116-0684c284eb4e/volumes" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.702993 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87eafc83-9ef5-4a79-8f79-e6386ba86071" path="/var/lib/kubelet/pods/87eafc83-9ef5-4a79-8f79-e6386ba86071/volumes" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.704123 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae7f276-37e7-4ef5-831a-4c8b911bfc84" path="/var/lib/kubelet/pods/9ae7f276-37e7-4ef5-831a-4c8b911bfc84/volumes" Jan 10 07:12:33 crc kubenswrapper[4810]: I0110 07:12:33.729876 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wgn8\" (UniqueName: \"kubernetes.io/projected/4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e-kube-api-access-8wgn8\") pod \"keystonec4b1-account-delete-nf6x5\" (UID: \"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e\") " pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" Jan 10 07:12:33 crc 
kubenswrapper[4810]: I0110 07:12:33.864673 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.143516 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.143835 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.197006 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.199382 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.301002 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/keystonec4b1-account-delete-nf6x5"] Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.302886 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7130a15-458b-41ff-9d3b-931ac0890ed9-operator-scripts\") pod \"b7130a15-458b-41ff-9d3b-931ac0890ed9\" (UID: \"b7130a15-458b-41ff-9d3b-931ac0890ed9\") " Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.302935 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb9rp\" (UniqueName: \"kubernetes.io/projected/b7130a15-458b-41ff-9d3b-931ac0890ed9-kube-api-access-kb9rp\") pod \"b7130a15-458b-41ff-9d3b-931ac0890ed9\" (UID: \"b7130a15-458b-41ff-9d3b-931ac0890ed9\") " Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.303441 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b7130a15-458b-41ff-9d3b-931ac0890ed9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7130a15-458b-41ff-9d3b-931ac0890ed9" (UID: "b7130a15-458b-41ff-9d3b-931ac0890ed9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.308256 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7130a15-458b-41ff-9d3b-931ac0890ed9-kube-api-access-kb9rp" (OuterVolumeSpecName: "kube-api-access-kb9rp") pod "b7130a15-458b-41ff-9d3b-931ac0890ed9" (UID: "b7130a15-458b-41ff-9d3b-931ac0890ed9"). InnerVolumeSpecName "kube-api-access-kb9rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.404828 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7130a15-458b-41ff-9d3b-931ac0890ed9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.404856 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb9rp\" (UniqueName: \"kubernetes.io/projected/b7130a15-458b-41ff-9d3b-931ac0890ed9-kube-api-access-kb9rp\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.885888 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" event={"ID":"b7130a15-458b-41ff-9d3b-931ac0890ed9","Type":"ContainerDied","Data":"fc7846f26c3abfe742e05f4276f9a2345f8bba4af9888c559b78270c35e9ed1d"} Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.885921 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/barbican2aa3-account-delete-kwwwp" Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.885938 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc7846f26c3abfe742e05f4276f9a2345f8bba4af9888c559b78270c35e9ed1d" Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.887490 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" event={"ID":"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e","Type":"ContainerStarted","Data":"55f11bcb141c37db068248bb010095bd67ea3c2f09cacd84260227af552d7be3"} Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.887523 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" event={"ID":"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e","Type":"ContainerStarted","Data":"56b12ea187c6e96f320c54e520cba7b63a69bc5e92077bf36e6c4b062d0e35ff"} Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.905877 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" podStartSLOduration=1.905858949 podStartE2EDuration="1.905858949s" podCreationTimestamp="2026-01-10 07:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:12:34.900165093 +0000 UTC m=+1583.515657976" watchObservedRunningTime="2026-01-10 07:12:34.905858949 +0000 UTC m=+1583.521351832" Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.942649 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:34 crc kubenswrapper[4810]: I0110 07:12:34.992975 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj95q"] Jan 10 07:12:36 crc kubenswrapper[4810]: I0110 07:12:36.902670 4810 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lj95q" podUID="937bd81b-b395-4267-a2b6-b82c5a90f3d7" containerName="registry-server" containerID="cri-o://b7448c5b5d1b0a3d3b0fe924f7c76171931ba85e061fe3903d549f0162e57615" gracePeriod=2 Jan 10 07:12:37 crc kubenswrapper[4810]: I0110 07:12:37.912773 4810 generic.go:334] "Generic (PLEG): container finished" podID="01b7ca68-697a-4b7a-8c56-73b4a6fc7283" containerID="98290f3347a44234bcf9800c5d3cc6af1020f0d56a2a4922cc3110b02f8ca929" exitCode=0 Jan 10 07:12:37 crc kubenswrapper[4810]: I0110 07:12:37.912828 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" event={"ID":"01b7ca68-697a-4b7a-8c56-73b4a6fc7283","Type":"ContainerDied","Data":"98290f3347a44234bcf9800c5d3cc6af1020f0d56a2a4922cc3110b02f8ca929"} Jan 10 07:12:38 crc kubenswrapper[4810]: I0110 07:12:38.144334 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-db-create-cpptw"] Jan 10 07:12:38 crc kubenswrapper[4810]: I0110 07:12:38.149389 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-db-create-cpptw"] Jan 10 07:12:38 crc kubenswrapper[4810]: I0110 07:12:38.153614 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican2aa3-account-delete-kwwwp"] Jan 10 07:12:38 crc kubenswrapper[4810]: I0110 07:12:38.158007 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/barbican-2aa3-account-create-update-krblm"] Jan 10 07:12:38 crc kubenswrapper[4810]: I0110 07:12:38.162656 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican2aa3-account-delete-kwwwp"] Jan 10 07:12:38 crc kubenswrapper[4810]: I0110 07:12:38.167070 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/barbican-2aa3-account-create-update-krblm"] Jan 10 07:12:38 crc kubenswrapper[4810]: I0110 07:12:38.924168 4810 generic.go:334] "Generic 
(PLEG): container finished" podID="937bd81b-b395-4267-a2b6-b82c5a90f3d7" containerID="b7448c5b5d1b0a3d3b0fe924f7c76171931ba85e061fe3903d549f0162e57615" exitCode=0 Jan 10 07:12:38 crc kubenswrapper[4810]: I0110 07:12:38.924292 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj95q" event={"ID":"937bd81b-b395-4267-a2b6-b82c5a90f3d7","Type":"ContainerDied","Data":"b7448c5b5d1b0a3d3b0fe924f7c76171931ba85e061fe3903d549f0162e57615"} Jan 10 07:12:38 crc kubenswrapper[4810]: I0110 07:12:38.926564 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" event={"ID":"01b7ca68-697a-4b7a-8c56-73b4a6fc7283","Type":"ContainerDied","Data":"05d8333240c2ab7ba456034c69ed31a99423aabbbb1f3f7a80f4e4c90a70128d"} Jan 10 07:12:38 crc kubenswrapper[4810]: I0110 07:12:38.926588 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05d8333240c2ab7ba456034c69ed31a99423aabbbb1f3f7a80f4e4c90a70128d" Jan 10 07:12:38 crc kubenswrapper[4810]: I0110 07:12:38.953496 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.066160 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-config-data\") pod \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.066253 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-scripts\") pod \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.066568 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw9bk\" (UniqueName: \"kubernetes.io/projected/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-kube-api-access-nw9bk\") pod \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.066645 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-fernet-keys\") pod \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.066675 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-credential-keys\") pod \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\" (UID: \"01b7ca68-697a-4b7a-8c56-73b4a6fc7283\") " Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.071988 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-scripts" (OuterVolumeSpecName: "scripts") pod "01b7ca68-697a-4b7a-8c56-73b4a6fc7283" (UID: "01b7ca68-697a-4b7a-8c56-73b4a6fc7283"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.072445 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "01b7ca68-697a-4b7a-8c56-73b4a6fc7283" (UID: "01b7ca68-697a-4b7a-8c56-73b4a6fc7283"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.072814 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-kube-api-access-nw9bk" (OuterVolumeSpecName: "kube-api-access-nw9bk") pod "01b7ca68-697a-4b7a-8c56-73b4a6fc7283" (UID: "01b7ca68-697a-4b7a-8c56-73b4a6fc7283"). InnerVolumeSpecName "kube-api-access-nw9bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.074025 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "01b7ca68-697a-4b7a-8c56-73b4a6fc7283" (UID: "01b7ca68-697a-4b7a-8c56-73b4a6fc7283"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.094568 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-config-data" (OuterVolumeSpecName: "config-data") pod "01b7ca68-697a-4b7a-8c56-73b4a6fc7283" (UID: "01b7ca68-697a-4b7a-8c56-73b4a6fc7283"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.139521 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.168347 4810 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.168384 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw9bk\" (UniqueName: \"kubernetes.io/projected/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-kube-api-access-nw9bk\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.168396 4810 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.168405 4810 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.168413 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01b7ca68-697a-4b7a-8c56-73b4a6fc7283-config-data\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.269508 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/937bd81b-b395-4267-a2b6-b82c5a90f3d7-utilities\") pod \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\" (UID: \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\") " Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.269648 4810 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfqc2\" (UniqueName: \"kubernetes.io/projected/937bd81b-b395-4267-a2b6-b82c5a90f3d7-kube-api-access-gfqc2\") pod \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\" (UID: \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\") " Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.269727 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/937bd81b-b395-4267-a2b6-b82c5a90f3d7-catalog-content\") pod \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\" (UID: \"937bd81b-b395-4267-a2b6-b82c5a90f3d7\") " Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.270411 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/937bd81b-b395-4267-a2b6-b82c5a90f3d7-utilities" (OuterVolumeSpecName: "utilities") pod "937bd81b-b395-4267-a2b6-b82c5a90f3d7" (UID: "937bd81b-b395-4267-a2b6-b82c5a90f3d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.273269 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/937bd81b-b395-4267-a2b6-b82c5a90f3d7-kube-api-access-gfqc2" (OuterVolumeSpecName: "kube-api-access-gfqc2") pod "937bd81b-b395-4267-a2b6-b82c5a90f3d7" (UID: "937bd81b-b395-4267-a2b6-b82c5a90f3d7"). InnerVolumeSpecName "kube-api-access-gfqc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.298952 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/937bd81b-b395-4267-a2b6-b82c5a90f3d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "937bd81b-b395-4267-a2b6-b82c5a90f3d7" (UID: "937bd81b-b395-4267-a2b6-b82c5a90f3d7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.371152 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/937bd81b-b395-4267-a2b6-b82c5a90f3d7-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.371191 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfqc2\" (UniqueName: \"kubernetes.io/projected/937bd81b-b395-4267-a2b6-b82c5a90f3d7-kube-api-access-gfqc2\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.371212 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/937bd81b-b395-4267-a2b6-b82c5a90f3d7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.700765 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c92036-8ef9-4631-85c1-a11bc9f9829b" path="/var/lib/kubelet/pods/b6c92036-8ef9-4631-85c1-a11bc9f9829b/volumes" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.701292 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7130a15-458b-41ff-9d3b-931ac0890ed9" path="/var/lib/kubelet/pods/b7130a15-458b-41ff-9d3b-931ac0890ed9/volumes" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.701774 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d200a34b-dbff-467c-aa3f-8fed2875175c" path="/var/lib/kubelet/pods/d200a34b-dbff-467c-aa3f-8fed2875175c/volumes" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.944835 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/keystone-5466b78d84-wrtxv" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.945993 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lj95q" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.946013 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj95q" event={"ID":"937bd81b-b395-4267-a2b6-b82c5a90f3d7","Type":"ContainerDied","Data":"a6b0d571c087f11b946199f5ece1d74337818484bef1ec21c61da660fab83da7"} Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.946268 4810 scope.go:117] "RemoveContainer" containerID="b7448c5b5d1b0a3d3b0fe924f7c76171931ba85e061fe3903d549f0162e57615" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.969097 4810 scope.go:117] "RemoveContainer" containerID="f32ad3b6685e4af8199eb7f218cd2e9fecdfe079442323a90702978c3a3699c3" Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.971335 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj95q"] Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.977568 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj95q"] Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.982308 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-5466b78d84-wrtxv"] Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.990281 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-5466b78d84-wrtxv"] Jan 10 07:12:39 crc kubenswrapper[4810]: I0110 07:12:39.990447 4810 scope.go:117] "RemoveContainer" containerID="bfacc0f29beb4ea98306e445d6cb3ccf037298faa2f95e19f24e8e432ddd9196" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.452405 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["swift-kuttl-tests/root-account-create-update-446zj"] Jan 10 07:12:40 crc kubenswrapper[4810]: E0110 07:12:40.452775 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="937bd81b-b395-4267-a2b6-b82c5a90f3d7" containerName="extract-utilities" 
Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.452796 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="937bd81b-b395-4267-a2b6-b82c5a90f3d7" containerName="extract-utilities" Jan 10 07:12:40 crc kubenswrapper[4810]: E0110 07:12:40.452810 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7130a15-458b-41ff-9d3b-931ac0890ed9" containerName="mariadb-account-delete" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.452819 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7130a15-458b-41ff-9d3b-931ac0890ed9" containerName="mariadb-account-delete" Jan 10 07:12:40 crc kubenswrapper[4810]: E0110 07:12:40.452840 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="937bd81b-b395-4267-a2b6-b82c5a90f3d7" containerName="extract-content" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.452849 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="937bd81b-b395-4267-a2b6-b82c5a90f3d7" containerName="extract-content" Jan 10 07:12:40 crc kubenswrapper[4810]: E0110 07:12:40.452866 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b7ca68-697a-4b7a-8c56-73b4a6fc7283" containerName="keystone-api" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.452876 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b7ca68-697a-4b7a-8c56-73b4a6fc7283" containerName="keystone-api" Jan 10 07:12:40 crc kubenswrapper[4810]: E0110 07:12:40.452895 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="937bd81b-b395-4267-a2b6-b82c5a90f3d7" containerName="registry-server" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.452904 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="937bd81b-b395-4267-a2b6-b82c5a90f3d7" containerName="registry-server" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.453086 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b7ca68-697a-4b7a-8c56-73b4a6fc7283" containerName="keystone-api" Jan 10 
07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.453106 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="937bd81b-b395-4267-a2b6-b82c5a90f3d7" containerName="registry-server" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.453127 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7130a15-458b-41ff-9d3b-931ac0890ed9" containerName="mariadb-account-delete" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.453748 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-446zj" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.456540 4810 reflector.go:368] Caches populated for *v1.Secret from object-"swift-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.468460 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/root-account-create-update-446zj"] Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.507986 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.530551 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.543622 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.580649 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/root-account-create-update-446zj"] Jan 10 07:12:40 crc kubenswrapper[4810]: E0110 07:12:40.581645 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-tdjjh operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="swift-kuttl-tests/root-account-create-update-446zj" podUID="abb6876d-4e9f-48b8-8613-ef077e422be1" Jan 10 07:12:40 
crc kubenswrapper[4810]: I0110 07:12:40.588952 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdjjh\" (UniqueName: \"kubernetes.io/projected/abb6876d-4e9f-48b8-8613-ef077e422be1-kube-api-access-tdjjh\") pod \"root-account-create-update-446zj\" (UID: \"abb6876d-4e9f-48b8-8613-ef077e422be1\") " pod="swift-kuttl-tests/root-account-create-update-446zj" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.589023 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abb6876d-4e9f-48b8-8613-ef077e422be1-operator-scripts\") pod \"root-account-create-update-446zj\" (UID: \"abb6876d-4e9f-48b8-8613-ef077e422be1\") " pod="swift-kuttl-tests/root-account-create-update-446zj" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.674984 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/openstack-galera-2" podUID="cb0387ec-3523-41da-b323-7249eb242b4d" containerName="galera" containerID="cri-o://67a99c00cac97b1c19d219e184d18d529285066fc5dd16563659a9bdb34e3262" gracePeriod=30 Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.691282 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdjjh\" (UniqueName: \"kubernetes.io/projected/abb6876d-4e9f-48b8-8613-ef077e422be1-kube-api-access-tdjjh\") pod \"root-account-create-update-446zj\" (UID: \"abb6876d-4e9f-48b8-8613-ef077e422be1\") " pod="swift-kuttl-tests/root-account-create-update-446zj" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.691338 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abb6876d-4e9f-48b8-8613-ef077e422be1-operator-scripts\") pod \"root-account-create-update-446zj\" (UID: \"abb6876d-4e9f-48b8-8613-ef077e422be1\") " 
pod="swift-kuttl-tests/root-account-create-update-446zj" Jan 10 07:12:40 crc kubenswrapper[4810]: E0110 07:12:40.691443 4810 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 10 07:12:40 crc kubenswrapper[4810]: E0110 07:12:40.691552 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/abb6876d-4e9f-48b8-8613-ef077e422be1-operator-scripts podName:abb6876d-4e9f-48b8-8613-ef077e422be1 nodeName:}" failed. No retries permitted until 2026-01-10 07:12:41.19149334 +0000 UTC m=+1589.806986223 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/abb6876d-4e9f-48b8-8613-ef077e422be1-operator-scripts") pod "root-account-create-update-446zj" (UID: "abb6876d-4e9f-48b8-8613-ef077e422be1") : configmap "openstack-scripts" not found Jan 10 07:12:40 crc kubenswrapper[4810]: E0110 07:12:40.696886 4810 projected.go:194] Error preparing data for projected volume kube-api-access-tdjjh for pod swift-kuttl-tests/root-account-create-update-446zj: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 10 07:12:40 crc kubenswrapper[4810]: E0110 07:12:40.696966 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abb6876d-4e9f-48b8-8613-ef077e422be1-kube-api-access-tdjjh podName:abb6876d-4e9f-48b8-8613-ef077e422be1 nodeName:}" failed. No retries permitted until 2026-01-10 07:12:41.19694344 +0000 UTC m=+1589.812436333 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tdjjh" (UniqueName: "kubernetes.io/projected/abb6876d-4e9f-48b8-8613-ef077e422be1-kube-api-access-tdjjh") pod "root-account-create-update-446zj" (UID: "abb6876d-4e9f-48b8-8613-ef077e422be1") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.956356 4810 generic.go:334] "Generic (PLEG): container finished" podID="4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e" containerID="55f11bcb141c37db068248bb010095bd67ea3c2f09cacd84260227af552d7be3" exitCode=0 Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.956471 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-446zj" Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.956495 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" event={"ID":"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e","Type":"ContainerDied","Data":"55f11bcb141c37db068248bb010095bd67ea3c2f09cacd84260227af552d7be3"} Jan 10 07:12:40 crc kubenswrapper[4810]: I0110 07:12:40.965864 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-446zj" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.147103 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.147366 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/memcached-0" podUID="0b41eafc-06e2-4d4c-8e1c-20220c76be9f" containerName="memcached" containerID="cri-o://c11918850f7f4d79b51c135bff1cc2801d4bd89a1614e8409dacafb464114839" gracePeriod=30 Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.198636 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdjjh\" (UniqueName: \"kubernetes.io/projected/abb6876d-4e9f-48b8-8613-ef077e422be1-kube-api-access-tdjjh\") pod \"root-account-create-update-446zj\" (UID: \"abb6876d-4e9f-48b8-8613-ef077e422be1\") " pod="swift-kuttl-tests/root-account-create-update-446zj" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.198688 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abb6876d-4e9f-48b8-8613-ef077e422be1-operator-scripts\") pod \"root-account-create-update-446zj\" (UID: \"abb6876d-4e9f-48b8-8613-ef077e422be1\") " pod="swift-kuttl-tests/root-account-create-update-446zj" Jan 10 07:12:41 crc kubenswrapper[4810]: E0110 07:12:41.198860 4810 configmap.go:193] Couldn't get configMap swift-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 10 07:12:41 crc kubenswrapper[4810]: E0110 07:12:41.198924 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/abb6876d-4e9f-48b8-8613-ef077e422be1-operator-scripts podName:abb6876d-4e9f-48b8-8613-ef077e422be1 nodeName:}" failed. No retries permitted until 2026-01-10 07:12:42.198905498 +0000 UTC m=+1590.814398381 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/abb6876d-4e9f-48b8-8613-ef077e422be1-operator-scripts") pod "root-account-create-update-446zj" (UID: "abb6876d-4e9f-48b8-8613-ef077e422be1") : configmap "openstack-scripts" not found Jan 10 07:12:41 crc kubenswrapper[4810]: E0110 07:12:41.201746 4810 projected.go:194] Error preparing data for projected volume kube-api-access-tdjjh for pod swift-kuttl-tests/root-account-create-update-446zj: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 10 07:12:41 crc kubenswrapper[4810]: E0110 07:12:41.201843 4810 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/abb6876d-4e9f-48b8-8613-ef077e422be1-kube-api-access-tdjjh podName:abb6876d-4e9f-48b8-8613-ef077e422be1 nodeName:}" failed. No retries permitted until 2026-01-10 07:12:42.201820417 +0000 UTC m=+1590.817313350 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tdjjh" (UniqueName: "kubernetes.io/projected/abb6876d-4e9f-48b8-8613-ef077e422be1-kube-api-access-tdjjh") pod "root-account-create-update-446zj" (UID: "abb6876d-4e9f-48b8-8613-ef077e422be1") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.630764 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.707370 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b7ca68-697a-4b7a-8c56-73b4a6fc7283" path="/var/lib/kubelet/pods/01b7ca68-697a-4b7a-8c56-73b4a6fc7283/volumes" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.707989 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="937bd81b-b395-4267-a2b6-b82c5a90f3d7" path="/var/lib/kubelet/pods/937bd81b-b395-4267-a2b6-b82c5a90f3d7/volumes" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.717924 4810 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.780751 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"] Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.813111 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-config-data-default\") pod \"cb0387ec-3523-41da-b323-7249eb242b4d\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.813272 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-operator-scripts\") pod \"cb0387ec-3523-41da-b323-7249eb242b4d\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.813418 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-kolla-config\") pod \"cb0387ec-3523-41da-b323-7249eb242b4d\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.813503 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cb0387ec-3523-41da-b323-7249eb242b4d-config-data-generated\") pod \"cb0387ec-3523-41da-b323-7249eb242b4d\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.813586 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cb0387ec-3523-41da-b323-7249eb242b4d\" (UID: 
\"cb0387ec-3523-41da-b323-7249eb242b4d\") " Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.813658 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s286v\" (UniqueName: \"kubernetes.io/projected/cb0387ec-3523-41da-b323-7249eb242b4d-kube-api-access-s286v\") pod \"cb0387ec-3523-41da-b323-7249eb242b4d\" (UID: \"cb0387ec-3523-41da-b323-7249eb242b4d\") " Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.813878 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "cb0387ec-3523-41da-b323-7249eb242b4d" (UID: "cb0387ec-3523-41da-b323-7249eb242b4d"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.814300 4810 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.814371 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "cb0387ec-3523-41da-b323-7249eb242b4d" (UID: "cb0387ec-3523-41da-b323-7249eb242b4d"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.814964 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb0387ec-3523-41da-b323-7249eb242b4d" (UID: "cb0387ec-3523-41da-b323-7249eb242b4d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.815482 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb0387ec-3523-41da-b323-7249eb242b4d-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "cb0387ec-3523-41da-b323-7249eb242b4d" (UID: "cb0387ec-3523-41da-b323-7249eb242b4d"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.829010 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb0387ec-3523-41da-b323-7249eb242b4d-kube-api-access-s286v" (OuterVolumeSpecName: "kube-api-access-s286v") pod "cb0387ec-3523-41da-b323-7249eb242b4d" (UID: "cb0387ec-3523-41da-b323-7249eb242b4d"). InnerVolumeSpecName "kube-api-access-s286v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.831553 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "mysql-db") pod "cb0387ec-3523-41da-b323-7249eb242b4d" (UID: "cb0387ec-3523-41da-b323-7249eb242b4d"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.916109 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.916141 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s286v\" (UniqueName: \"kubernetes.io/projected/cb0387ec-3523-41da-b323-7249eb242b4d-kube-api-access-s286v\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.916153 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.916161 4810 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cb0387ec-3523-41da-b323-7249eb242b4d-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.916170 4810 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cb0387ec-3523-41da-b323-7249eb242b4d-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.929962 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.967633 4810 generic.go:334] "Generic (PLEG): container finished" podID="cb0387ec-3523-41da-b323-7249eb242b4d" containerID="67a99c00cac97b1c19d219e184d18d529285066fc5dd16563659a9bdb34e3262" exitCode=0 Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.967699 4810 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-2" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.967741 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cb0387ec-3523-41da-b323-7249eb242b4d","Type":"ContainerDied","Data":"67a99c00cac97b1c19d219e184d18d529285066fc5dd16563659a9bdb34e3262"} Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.967830 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-2" event={"ID":"cb0387ec-3523-41da-b323-7249eb242b4d","Type":"ContainerDied","Data":"d6a4940484c8a0697ec180dac285bb1ecfa19df0e9ea8b0a48d20144678c8c48"} Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.967850 4810 scope.go:117] "RemoveContainer" containerID="67a99c00cac97b1c19d219e184d18d529285066fc5dd16563659a9bdb34e3262" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.969805 4810 generic.go:334] "Generic (PLEG): container finished" podID="0b41eafc-06e2-4d4c-8e1c-20220c76be9f" containerID="c11918850f7f4d79b51c135bff1cc2801d4bd89a1614e8409dacafb464114839" exitCode=0 Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.969894 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"0b41eafc-06e2-4d4c-8e1c-20220c76be9f","Type":"ContainerDied","Data":"c11918850f7f4d79b51c135bff1cc2801d4bd89a1614e8409dacafb464114839"} Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.969956 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/root-account-create-update-446zj" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.989226 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/memcached-0" Jan 10 07:12:41 crc kubenswrapper[4810]: I0110 07:12:41.996979 4810 scope.go:117] "RemoveContainer" containerID="0667d29df15d6d35e4e5e52c6f0e98d67a8d5c8d3f5a02fbea3f173b0dd2e525" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.018747 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.033512 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/rabbitmq-server-0" podUID="13c176cb-a9d8-49ed-9d35-78a2975d9dd6" containerName="rabbitmq" containerID="cri-o://accf5782f118e6a962396bd37a632b177012a5e9914f7f905f8766a80feb6dd3" gracePeriod=604800 Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.036709 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/root-account-create-update-446zj"] Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.056445 4810 scope.go:117] "RemoveContainer" containerID="67a99c00cac97b1c19d219e184d18d529285066fc5dd16563659a9bdb34e3262" Jan 10 07:12:42 crc kubenswrapper[4810]: E0110 07:12:42.056974 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a99c00cac97b1c19d219e184d18d529285066fc5dd16563659a9bdb34e3262\": container with ID starting with 67a99c00cac97b1c19d219e184d18d529285066fc5dd16563659a9bdb34e3262 not found: ID does not exist" containerID="67a99c00cac97b1c19d219e184d18d529285066fc5dd16563659a9bdb34e3262" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.057011 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a99c00cac97b1c19d219e184d18d529285066fc5dd16563659a9bdb34e3262"} err="failed to get container status \"67a99c00cac97b1c19d219e184d18d529285066fc5dd16563659a9bdb34e3262\": 
rpc error: code = NotFound desc = could not find container \"67a99c00cac97b1c19d219e184d18d529285066fc5dd16563659a9bdb34e3262\": container with ID starting with 67a99c00cac97b1c19d219e184d18d529285066fc5dd16563659a9bdb34e3262 not found: ID does not exist" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.057046 4810 scope.go:117] "RemoveContainer" containerID="0667d29df15d6d35e4e5e52c6f0e98d67a8d5c8d3f5a02fbea3f173b0dd2e525" Jan 10 07:12:42 crc kubenswrapper[4810]: E0110 07:12:42.059494 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0667d29df15d6d35e4e5e52c6f0e98d67a8d5c8d3f5a02fbea3f173b0dd2e525\": container with ID starting with 0667d29df15d6d35e4e5e52c6f0e98d67a8d5c8d3f5a02fbea3f173b0dd2e525 not found: ID does not exist" containerID="0667d29df15d6d35e4e5e52c6f0e98d67a8d5c8d3f5a02fbea3f173b0dd2e525" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.059554 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0667d29df15d6d35e4e5e52c6f0e98d67a8d5c8d3f5a02fbea3f173b0dd2e525"} err="failed to get container status \"0667d29df15d6d35e4e5e52c6f0e98d67a8d5c8d3f5a02fbea3f173b0dd2e525\": rpc error: code = NotFound desc = could not find container \"0667d29df15d6d35e4e5e52c6f0e98d67a8d5c8d3f5a02fbea3f173b0dd2e525\": container with ID starting with 0667d29df15d6d35e4e5e52c6f0e98d67a8d5c8d3f5a02fbea3f173b0dd2e525 not found: ID does not exist" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.070760 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/root-account-create-update-446zj"] Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.077624 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.090668 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/openstack-galera-2"] Jan 10 
07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.123581 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-config-data\") pod \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\" (UID: \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\") " Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.123703 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-kolla-config\") pod \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\" (UID: \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\") " Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.123828 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6drj\" (UniqueName: \"kubernetes.io/projected/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-kube-api-access-z6drj\") pod \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\" (UID: \"0b41eafc-06e2-4d4c-8e1c-20220c76be9f\") " Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.124350 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "0b41eafc-06e2-4d4c-8e1c-20220c76be9f" (UID: "0b41eafc-06e2-4d4c-8e1c-20220c76be9f"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.124735 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-config-data" (OuterVolumeSpecName: "config-data") pod "0b41eafc-06e2-4d4c-8e1c-20220c76be9f" (UID: "0b41eafc-06e2-4d4c-8e1c-20220c76be9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.133003 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-kube-api-access-z6drj" (OuterVolumeSpecName: "kube-api-access-z6drj") pod "0b41eafc-06e2-4d4c-8e1c-20220c76be9f" (UID: "0b41eafc-06e2-4d4c-8e1c-20220c76be9f"). InnerVolumeSpecName "kube-api-access-z6drj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.225736 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/abb6876d-4e9f-48b8-8613-ef077e422be1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.225778 4810 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-config-data\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.225792 4810 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.225804 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdjjh\" (UniqueName: \"kubernetes.io/projected/abb6876d-4e9f-48b8-8613-ef077e422be1-kube-api-access-tdjjh\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.225820 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6drj\" (UniqueName: \"kubernetes.io/projected/0b41eafc-06e2-4d4c-8e1c-20220c76be9f-kube-api-access-z6drj\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.252707 4810 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.330079 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wgn8\" (UniqueName: \"kubernetes.io/projected/4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e-kube-api-access-8wgn8\") pod \"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e\" (UID: \"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e\") " Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.330208 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e-operator-scripts\") pod \"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e\" (UID: \"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e\") " Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.334749 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e" (UID: "4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.337638 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e-kube-api-access-8wgn8" (OuterVolumeSpecName: "kube-api-access-8wgn8") pod "4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e" (UID: "4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e"). InnerVolumeSpecName "kube-api-access-8wgn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.432142 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wgn8\" (UniqueName: \"kubernetes.io/projected/4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e-kube-api-access-8wgn8\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.432225 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.705461 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/openstack-galera-1" podUID="06676b15-3e4e-4fa8-bfe6-3d95ad522c31" containerName="galera" containerID="cri-o://aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb" gracePeriod=28 Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.978577 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.978572 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/keystonec4b1-account-delete-nf6x5" event={"ID":"4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e","Type":"ContainerDied","Data":"56b12ea187c6e96f320c54e520cba7b63a69bc5e92077bf36e6c4b062d0e35ff"} Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.979241 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56b12ea187c6e96f320c54e520cba7b63a69bc5e92077bf36e6c4b062d0e35ff" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.982126 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/memcached-0" event={"ID":"0b41eafc-06e2-4d4c-8e1c-20220c76be9f","Type":"ContainerDied","Data":"29643a49a8e7ec32a80d03c7b842956f5a293d0c987c47b5317fb79d8ded5c00"} Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.982175 4810 scope.go:117] "RemoveContainer" containerID="c11918850f7f4d79b51c135bff1cc2801d4bd89a1614e8409dacafb464114839" Jan 10 07:12:42 crc kubenswrapper[4810]: I0110 07:12:42.982281 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/memcached-0" Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.050673 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.058909 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/memcached-0"] Jan 10 07:12:43 crc kubenswrapper[4810]: E0110 07:12:43.545605 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 10 07:12:43 crc kubenswrapper[4810]: E0110 07:12:43.547061 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 10 07:12:43 crc kubenswrapper[4810]: E0110 07:12:43.549088 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 10 07:12:43 crc kubenswrapper[4810]: E0110 07:12:43.549122 4810 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="swift-kuttl-tests/openstack-galera-1" podUID="06676b15-3e4e-4fa8-bfe6-3d95ad522c31" containerName="galera" Jan 10 07:12:43 
crc kubenswrapper[4810]: I0110 07:12:43.573736 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0" Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.574790 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-db-create-w9npk"] Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.581027 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-db-create-w9npk"] Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.591411 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4"] Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.596465 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/keystonec4b1-account-delete-nf6x5"] Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.601068 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystone-c4b1-account-create-update-4b6n4"] Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.610404 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/keystonec4b1-account-delete-nf6x5"] Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.649034 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqqdz\" (UniqueName: \"kubernetes.io/projected/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-kube-api-access-jqqdz\") pod \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.649099 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-erlang-cookie\") pod \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " Jan 10 07:12:43 crc 
kubenswrapper[4810]: I0110 07:12:43.649152 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-confd\") pod \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.649228 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-erlang-cookie-secret\") pod \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.649258 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-plugins-conf\") pod \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.649282 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-plugins\") pod \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.649331 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-pod-info\") pod \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.649479 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee\") pod \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\" (UID: \"13c176cb-a9d8-49ed-9d35-78a2975d9dd6\") " Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.649915 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "13c176cb-a9d8-49ed-9d35-78a2975d9dd6" (UID: "13c176cb-a9d8-49ed-9d35-78a2975d9dd6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.650226 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "13c176cb-a9d8-49ed-9d35-78a2975d9dd6" (UID: "13c176cb-a9d8-49ed-9d35-78a2975d9dd6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.650284 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "13c176cb-a9d8-49ed-9d35-78a2975d9dd6" (UID: "13c176cb-a9d8-49ed-9d35-78a2975d9dd6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.654335 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "13c176cb-a9d8-49ed-9d35-78a2975d9dd6" (UID: "13c176cb-a9d8-49ed-9d35-78a2975d9dd6"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.654480 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-kube-api-access-jqqdz" (OuterVolumeSpecName: "kube-api-access-jqqdz") pod "13c176cb-a9d8-49ed-9d35-78a2975d9dd6" (UID: "13c176cb-a9d8-49ed-9d35-78a2975d9dd6"). InnerVolumeSpecName "kube-api-access-jqqdz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.658424 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-pod-info" (OuterVolumeSpecName: "pod-info") pod "13c176cb-a9d8-49ed-9d35-78a2975d9dd6" (UID: "13c176cb-a9d8-49ed-9d35-78a2975d9dd6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.658791 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee" (OuterVolumeSpecName: "persistence") pod "13c176cb-a9d8-49ed-9d35-78a2975d9dd6" (UID: "13c176cb-a9d8-49ed-9d35-78a2975d9dd6"). InnerVolumeSpecName "pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.702231 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b41eafc-06e2-4d4c-8e1c-20220c76be9f" path="/var/lib/kubelet/pods/0b41eafc-06e2-4d4c-8e1c-20220c76be9f/volumes"
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.702877 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e" path="/var/lib/kubelet/pods/4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e/volumes"
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.703658 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="906dd112-4b63-4f4a-af23-98e0b48be7f3" path="/var/lib/kubelet/pods/906dd112-4b63-4f4a-af23-98e0b48be7f3/volumes"
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.704170 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abb6876d-4e9f-48b8-8613-ef077e422be1" path="/var/lib/kubelet/pods/abb6876d-4e9f-48b8-8613-ef077e422be1/volumes"
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.705364 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb0387ec-3523-41da-b323-7249eb242b4d" path="/var/lib/kubelet/pods/cb0387ec-3523-41da-b323-7249eb242b4d/volumes"
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.706264 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f038cf-6ade-41e7-a85d-0c73babd42a2" path="/var/lib/kubelet/pods/f4f038cf-6ade-41e7-a85d-0c73babd42a2/volumes"
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.711807 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "13c176cb-a9d8-49ed-9d35-78a2975d9dd6" (UID: "13c176cb-a9d8-49ed-9d35-78a2975d9dd6"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.751181 4810 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-pod-info\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.751307 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee\") on node \"crc\" "
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.751334 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqqdz\" (UniqueName: \"kubernetes.io/projected/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-kube-api-access-jqqdz\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.751354 4810 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.751365 4810 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.751377 4810 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.751412 4810 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-plugins-conf\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.751424 4810 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/13c176cb-a9d8-49ed-9d35-78a2975d9dd6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.765081 4810 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.765275 4810 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee") on node "crc"
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.853318 4810 reconciler_common.go:293] "Volume detached for volume \"pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d2dd70f6-18a8-48c3-a615-e069cbcb7bee\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.991320 4810 generic.go:334] "Generic (PLEG): container finished" podID="13c176cb-a9d8-49ed-9d35-78a2975d9dd6" containerID="accf5782f118e6a962396bd37a632b177012a5e9914f7f905f8766a80feb6dd3" exitCode=0
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.991361 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"13c176cb-a9d8-49ed-9d35-78a2975d9dd6","Type":"ContainerDied","Data":"accf5782f118e6a962396bd37a632b177012a5e9914f7f905f8766a80feb6dd3"}
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.991387 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/rabbitmq-server-0" event={"ID":"13c176cb-a9d8-49ed-9d35-78a2975d9dd6","Type":"ContainerDied","Data":"a5028fcbbe63ce9c6c73b201b7c92c92f98c9dffabf4d95161f2b87b7885b155"}
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.991404 4810 scope.go:117] "RemoveContainer" containerID="accf5782f118e6a962396bd37a632b177012a5e9914f7f905f8766a80feb6dd3"
Jan 10 07:12:43 crc kubenswrapper[4810]: I0110 07:12:43.991496 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/rabbitmq-server-0"
Jan 10 07:12:44 crc kubenswrapper[4810]: I0110 07:12:44.025314 4810 scope.go:117] "RemoveContainer" containerID="9de41f5b3f8c47cf3fb986f5f82afe14c4c2d7aeb129dbeef40727c175a8f327"
Jan 10 07:12:44 crc kubenswrapper[4810]: I0110 07:12:44.035768 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"]
Jan 10 07:12:44 crc kubenswrapper[4810]: I0110 07:12:44.041615 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/rabbitmq-server-0"]
Jan 10 07:12:44 crc kubenswrapper[4810]: I0110 07:12:44.041782 4810 scope.go:117] "RemoveContainer" containerID="accf5782f118e6a962396bd37a632b177012a5e9914f7f905f8766a80feb6dd3"
Jan 10 07:12:44 crc kubenswrapper[4810]: E0110 07:12:44.042306 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"accf5782f118e6a962396bd37a632b177012a5e9914f7f905f8766a80feb6dd3\": container with ID starting with accf5782f118e6a962396bd37a632b177012a5e9914f7f905f8766a80feb6dd3 not found: ID does not exist" containerID="accf5782f118e6a962396bd37a632b177012a5e9914f7f905f8766a80feb6dd3"
Jan 10 07:12:44 crc kubenswrapper[4810]: I0110 07:12:44.042339 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"accf5782f118e6a962396bd37a632b177012a5e9914f7f905f8766a80feb6dd3"} err="failed to get container status \"accf5782f118e6a962396bd37a632b177012a5e9914f7f905f8766a80feb6dd3\": rpc error: code = NotFound desc = could not find container \"accf5782f118e6a962396bd37a632b177012a5e9914f7f905f8766a80feb6dd3\": container with ID starting with accf5782f118e6a962396bd37a632b177012a5e9914f7f905f8766a80feb6dd3 not found: ID does not exist"
Jan 10 07:12:44 crc kubenswrapper[4810]: I0110 07:12:44.042364 4810 scope.go:117] "RemoveContainer" containerID="9de41f5b3f8c47cf3fb986f5f82afe14c4c2d7aeb129dbeef40727c175a8f327"
Jan 10 07:12:44 crc kubenswrapper[4810]: E0110 07:12:44.042768 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de41f5b3f8c47cf3fb986f5f82afe14c4c2d7aeb129dbeef40727c175a8f327\": container with ID starting with 9de41f5b3f8c47cf3fb986f5f82afe14c4c2d7aeb129dbeef40727c175a8f327 not found: ID does not exist" containerID="9de41f5b3f8c47cf3fb986f5f82afe14c4c2d7aeb129dbeef40727c175a8f327"
Jan 10 07:12:44 crc kubenswrapper[4810]: I0110 07:12:44.042793 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de41f5b3f8c47cf3fb986f5f82afe14c4c2d7aeb129dbeef40727c175a8f327"} err="failed to get container status \"9de41f5b3f8c47cf3fb986f5f82afe14c4c2d7aeb129dbeef40727c175a8f327\": rpc error: code = NotFound desc = could not find container \"9de41f5b3f8c47cf3fb986f5f82afe14c4c2d7aeb129dbeef40727c175a8f327\": container with ID starting with 9de41f5b3f8c47cf3fb986f5f82afe14c4c2d7aeb129dbeef40727c175a8f327 not found: ID does not exist"
Jan 10 07:12:44 crc kubenswrapper[4810]: I0110 07:12:44.715363 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="swift-kuttl-tests/openstack-galera-0" podUID="692848fd-4cf4-401e-819f-c14ac900efea" containerName="galera" containerID="cri-o://bf7e22e268f1c6b6474a96ff020a3cf3ca568fcb6c44c8c82772fc80668447ff" gracePeriod=26
Jan 10 07:12:45 crc kubenswrapper[4810]: I0110 07:12:45.705083 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c176cb-a9d8-49ed-9d35-78a2975d9dd6" path="/var/lib/kubelet/pods/13c176cb-a9d8-49ed-9d35-78a2975d9dd6/volumes"
Jan 10 07:12:47 crc kubenswrapper[4810]: I0110 07:12:47.891790 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"]
Jan 10 07:12:47 crc kubenswrapper[4810]: I0110 07:12:47.892402 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz" podUID="6ab3773c-d14d-45f3-8aab-fb5b3eb075ee" containerName="manager" containerID="cri-o://94b80991cc8e7345157481034bf2533ae23fd39de75ed8e18e6dc57c7eaea2fb" gracePeriod=10
Jan 10 07:12:48 crc kubenswrapper[4810]: I0110 07:12:48.152833 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-index-vn7dw"]
Jan 10 07:12:48 crc kubenswrapper[4810]: I0110 07:12:48.153348 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/swift-operator-index-vn7dw" podUID="0b3bfd13-4157-4289-981b-55602232df01" containerName="registry-server" containerID="cri-o://2e3dac380ab3a2450bb7290a406204ebc235725a2cd74069ce8f82180493914e" gracePeriod=30
Jan 10 07:12:48 crc kubenswrapper[4810]: I0110 07:12:48.183219 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr"]
Jan 10 07:12:48 crc kubenswrapper[4810]: I0110 07:12:48.193726 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/1f4617bb660cd39f908cbd2c87e4b067b4ac51b6cbd73a64eb6934384cqf7lr"]
Jan 10 07:12:48 crc kubenswrapper[4810]: E0110 07:12:48.304311 4810 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b3bfd13_4157_4289_981b_55602232df01.slice/crio-2e3dac380ab3a2450bb7290a406204ebc235725a2cd74069ce8f82180493914e.scope\": RecentStats: unable to find data in memory cache]"
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.033074 4810 generic.go:334] "Generic (PLEG): container finished" podID="0b3bfd13-4157-4289-981b-55602232df01" containerID="2e3dac380ab3a2450bb7290a406204ebc235725a2cd74069ce8f82180493914e" exitCode=0
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.033170 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-vn7dw" event={"ID":"0b3bfd13-4157-4289-981b-55602232df01","Type":"ContainerDied","Data":"2e3dac380ab3a2450bb7290a406204ebc235725a2cd74069ce8f82180493914e"}
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.036568 4810 generic.go:334] "Generic (PLEG): container finished" podID="6ab3773c-d14d-45f3-8aab-fb5b3eb075ee" containerID="94b80991cc8e7345157481034bf2533ae23fd39de75ed8e18e6dc57c7eaea2fb" exitCode=0
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.036601 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz" event={"ID":"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee","Type":"ContainerDied","Data":"94b80991cc8e7345157481034bf2533ae23fd39de75ed8e18e6dc57c7eaea2fb"}
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.275093 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.424734 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-apiservice-cert\") pod \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\" (UID: \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\") "
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.424848 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6pd9\" (UniqueName: \"kubernetes.io/projected/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-kube-api-access-w6pd9\") pod \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\" (UID: \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\") "
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.424925 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-webhook-cert\") pod \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\" (UID: \"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee\") "
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.430190 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "6ab3773c-d14d-45f3-8aab-fb5b3eb075ee" (UID: "6ab3773c-d14d-45f3-8aab-fb5b3eb075ee"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.430212 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-kube-api-access-w6pd9" (OuterVolumeSpecName: "kube-api-access-w6pd9") pod "6ab3773c-d14d-45f3-8aab-fb5b3eb075ee" (UID: "6ab3773c-d14d-45f3-8aab-fb5b3eb075ee"). InnerVolumeSpecName "kube-api-access-w6pd9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.430399 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6ab3773c-d14d-45f3-8aab-fb5b3eb075ee" (UID: "6ab3773c-d14d-45f3-8aab-fb5b3eb075ee"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.526819 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6pd9\" (UniqueName: \"kubernetes.io/projected/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-kube-api-access-w6pd9\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.526854 4810 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.526865 4810 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:49 crc kubenswrapper[4810]: I0110 07:12:49.700896 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58f3a749-f75e-4359-975c-cde9d6cca018" path="/var/lib/kubelet/pods/58f3a749-f75e-4359-975c-cde9d6cca018/volumes"
Jan 10 07:12:50 crc kubenswrapper[4810]: I0110 07:12:50.048660 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz" event={"ID":"6ab3773c-d14d-45f3-8aab-fb5b3eb075ee","Type":"ContainerDied","Data":"ebea0ebbf22fdf163bb92ff44d3dd495dcb6da1c98dac28d044293020ad440c4"}
Jan 10 07:12:50 crc kubenswrapper[4810]: I0110 07:12:50.048709 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"
Jan 10 07:12:50 crc kubenswrapper[4810]: I0110 07:12:50.049006 4810 scope.go:117] "RemoveContainer" containerID="94b80991cc8e7345157481034bf2533ae23fd39de75ed8e18e6dc57c7eaea2fb"
Jan 10 07:12:50 crc kubenswrapper[4810]: I0110 07:12:50.075123 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"]
Jan 10 07:12:50 crc kubenswrapper[4810]: I0110 07:12:50.081116 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6ddd477b75-wq6zz"]
Jan 10 07:12:50 crc kubenswrapper[4810]: I0110 07:12:50.115305 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-vn7dw"
Jan 10 07:12:50 crc kubenswrapper[4810]: I0110 07:12:50.237452 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dphnr\" (UniqueName: \"kubernetes.io/projected/0b3bfd13-4157-4289-981b-55602232df01-kube-api-access-dphnr\") pod \"0b3bfd13-4157-4289-981b-55602232df01\" (UID: \"0b3bfd13-4157-4289-981b-55602232df01\") "
Jan 10 07:12:50 crc kubenswrapper[4810]: I0110 07:12:50.244557 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3bfd13-4157-4289-981b-55602232df01-kube-api-access-dphnr" (OuterVolumeSpecName: "kube-api-access-dphnr") pod "0b3bfd13-4157-4289-981b-55602232df01" (UID: "0b3bfd13-4157-4289-981b-55602232df01"). InnerVolumeSpecName "kube-api-access-dphnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:12:50 crc kubenswrapper[4810]: I0110 07:12:50.339248 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dphnr\" (UniqueName: \"kubernetes.io/projected/0b3bfd13-4157-4289-981b-55602232df01-kube-api-access-dphnr\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.058122 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-vn7dw" event={"ID":"0b3bfd13-4157-4289-981b-55602232df01","Type":"ContainerDied","Data":"ad85aa4c5d5c7b3a735e4dd8c82a81cdc8dc53e7d5e661bfaadbb838028a31a0"}
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.058455 4810 scope.go:117] "RemoveContainer" containerID="2e3dac380ab3a2450bb7290a406204ebc235725a2cd74069ce8f82180493914e"
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.058227 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-vn7dw"
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.091217 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-index-vn7dw"]
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.095390 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/swift-operator-index-vn7dw"]
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.187564 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg"]
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.188016 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" podUID="9e7172d7-ebe7-4a16-878f-a82b25cdecbb" containerName="manager" containerID="cri-o://9189bec18c282ea248daa8eb931becd8a29f3c4d69d54f70c935c8e586a42233" gracePeriod=10
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.535917 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-index-rp747"]
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.536154 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/barbican-operator-index-rp747" podUID="1ab9e5c2-daee-4d73-b55e-df866fba1922" containerName="registry-server" containerID="cri-o://45c058465eed30c7a33bbe63121d74f861ac4f2c1c836891cc699ca9f54f1938" gracePeriod=30
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.574539 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7"]
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.581738 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/784e42162c0740cf7d885ef9fa703fa231b767e9f77df307358e54cdeft7zp7"]
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.703559 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b3bfd13-4157-4289-981b-55602232df01" path="/var/lib/kubelet/pods/0b3bfd13-4157-4289-981b-55602232df01/volumes"
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.704039 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ab3773c-d14d-45f3-8aab-fb5b3eb075ee" path="/var/lib/kubelet/pods/6ab3773c-d14d-45f3-8aab-fb5b3eb075ee/volumes"
Jan 10 07:12:51 crc kubenswrapper[4810]: I0110 07:12:51.704539 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e10035a-26d7-4c1e-932c-e5472e945541" path="/var/lib/kubelet/pods/8e10035a-26d7-4c1e-932c-e5472e945541/volumes"
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.070673 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-rp747"
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.083329 4810 generic.go:334] "Generic (PLEG): container finished" podID="9e7172d7-ebe7-4a16-878f-a82b25cdecbb" containerID="9189bec18c282ea248daa8eb931becd8a29f3c4d69d54f70c935c8e586a42233" exitCode=0
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.083439 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" event={"ID":"9e7172d7-ebe7-4a16-878f-a82b25cdecbb","Type":"ContainerDied","Data":"9189bec18c282ea248daa8eb931becd8a29f3c4d69d54f70c935c8e586a42233"}
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.095826 4810 generic.go:334] "Generic (PLEG): container finished" podID="06676b15-3e4e-4fa8-bfe6-3d95ad522c31" containerID="aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb" exitCode=0
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.095896 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"06676b15-3e4e-4fa8-bfe6-3d95ad522c31","Type":"ContainerDied","Data":"aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb"}
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.107030 4810 generic.go:334] "Generic (PLEG): container finished" podID="692848fd-4cf4-401e-819f-c14ac900efea" containerID="bf7e22e268f1c6b6474a96ff020a3cf3ca568fcb6c44c8c82772fc80668447ff" exitCode=0
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.107124 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"692848fd-4cf4-401e-819f-c14ac900efea","Type":"ContainerDied","Data":"bf7e22e268f1c6b6474a96ff020a3cf3ca568fcb6c44c8c82772fc80668447ff"}
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.110146 4810 generic.go:334] "Generic (PLEG): container finished" podID="1ab9e5c2-daee-4d73-b55e-df866fba1922" containerID="45c058465eed30c7a33bbe63121d74f861ac4f2c1c836891cc699ca9f54f1938" exitCode=0
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.110215 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-rp747" event={"ID":"1ab9e5c2-daee-4d73-b55e-df866fba1922","Type":"ContainerDied","Data":"45c058465eed30c7a33bbe63121d74f861ac4f2c1c836891cc699ca9f54f1938"}
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.110242 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-index-rp747" event={"ID":"1ab9e5c2-daee-4d73-b55e-df866fba1922","Type":"ContainerDied","Data":"b727f1d5f91d4c15691c3f321d2700c44961d8bd49e5b04a4502b417776cb168"}
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.110258 4810 scope.go:117] "RemoveContainer" containerID="45c058465eed30c7a33bbe63121d74f861ac4f2c1c836891cc699ca9f54f1938"
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.110362 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-index-rp747"
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.124044 4810 scope.go:117] "RemoveContainer" containerID="45c058465eed30c7a33bbe63121d74f861ac4f2c1c836891cc699ca9f54f1938"
Jan 10 07:12:52 crc kubenswrapper[4810]: E0110 07:12:52.124301 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45c058465eed30c7a33bbe63121d74f861ac4f2c1c836891cc699ca9f54f1938\": container with ID starting with 45c058465eed30c7a33bbe63121d74f861ac4f2c1c836891cc699ca9f54f1938 not found: ID does not exist" containerID="45c058465eed30c7a33bbe63121d74f861ac4f2c1c836891cc699ca9f54f1938"
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.124341 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45c058465eed30c7a33bbe63121d74f861ac4f2c1c836891cc699ca9f54f1938"} err="failed to get container status \"45c058465eed30c7a33bbe63121d74f861ac4f2c1c836891cc699ca9f54f1938\": rpc error: code = NotFound desc = could not find container \"45c058465eed30c7a33bbe63121d74f861ac4f2c1c836891cc699ca9f54f1938\": container with ID starting with 45c058465eed30c7a33bbe63121d74f861ac4f2c1c836891cc699ca9f54f1938 not found: ID does not exist"
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.126168 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0"
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.170462 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99mnh\" (UniqueName: \"kubernetes.io/projected/1ab9e5c2-daee-4d73-b55e-df866fba1922-kube-api-access-99mnh\") pod \"1ab9e5c2-daee-4d73-b55e-df866fba1922\" (UID: \"1ab9e5c2-daee-4d73-b55e-df866fba1922\") "
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.176074 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ab9e5c2-daee-4d73-b55e-df866fba1922-kube-api-access-99mnh" (OuterVolumeSpecName: "kube-api-access-99mnh") pod "1ab9e5c2-daee-4d73-b55e-df866fba1922" (UID: "1ab9e5c2-daee-4d73-b55e-df866fba1922"). InnerVolumeSpecName "kube-api-access-99mnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.271313 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-config-data-default\") pod \"692848fd-4cf4-401e-819f-c14ac900efea\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") "
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.271378 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"692848fd-4cf4-401e-819f-c14ac900efea\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") "
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.271438 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-operator-scripts\") pod \"692848fd-4cf4-401e-819f-c14ac900efea\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") "
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.271473 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk8tl\" (UniqueName: \"kubernetes.io/projected/692848fd-4cf4-401e-819f-c14ac900efea-kube-api-access-nk8tl\") pod \"692848fd-4cf4-401e-819f-c14ac900efea\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") "
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.271511 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-kolla-config\") pod \"692848fd-4cf4-401e-819f-c14ac900efea\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") "
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.271533 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/692848fd-4cf4-401e-819f-c14ac900efea-config-data-generated\") pod \"692848fd-4cf4-401e-819f-c14ac900efea\" (UID: \"692848fd-4cf4-401e-819f-c14ac900efea\") "
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.271830 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99mnh\" (UniqueName: \"kubernetes.io/projected/1ab9e5c2-daee-4d73-b55e-df866fba1922-kube-api-access-99mnh\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.271988 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692848fd-4cf4-401e-819f-c14ac900efea-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "692848fd-4cf4-401e-819f-c14ac900efea" (UID: "692848fd-4cf4-401e-819f-c14ac900efea"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.272101 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "692848fd-4cf4-401e-819f-c14ac900efea" (UID: "692848fd-4cf4-401e-819f-c14ac900efea"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.272319 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "692848fd-4cf4-401e-819f-c14ac900efea" (UID: "692848fd-4cf4-401e-819f-c14ac900efea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.273001 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "692848fd-4cf4-401e-819f-c14ac900efea" (UID: "692848fd-4cf4-401e-819f-c14ac900efea"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.274605 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692848fd-4cf4-401e-819f-c14ac900efea-kube-api-access-nk8tl" (OuterVolumeSpecName: "kube-api-access-nk8tl") pod "692848fd-4cf4-401e-819f-c14ac900efea" (UID: "692848fd-4cf4-401e-819f-c14ac900efea"). InnerVolumeSpecName "kube-api-access-nk8tl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.279427 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "mysql-db") pod "692848fd-4cf4-401e-819f-c14ac900efea" (UID: "692848fd-4cf4-401e-819f-c14ac900efea"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.372747 4810 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-config-data-default\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.372807 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.372819 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.372829 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk8tl\" (UniqueName: \"kubernetes.io/projected/692848fd-4cf4-401e-819f-c14ac900efea-kube-api-access-nk8tl\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.372839 4810 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/692848fd-4cf4-401e-819f-c14ac900efea-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.372848 4810 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/692848fd-4cf4-401e-819f-c14ac900efea-config-data-generated\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.383600 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.440345 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-index-rp747"]
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.444110 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/barbican-operator-index-rp747"]
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.473901 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 10 07:12:52 crc kubenswrapper[4810]: I0110 07:12:52.978831 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg"
Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.079586 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-apiservice-cert\") pod \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\" (UID: \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\") "
Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.079680 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-webhook-cert\") pod \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\" (UID: \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\") "
Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.079860 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nhwm\" (UniqueName: \"kubernetes.io/projected/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-kube-api-access-4nhwm\") pod \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\" (UID: \"9e7172d7-ebe7-4a16-878f-a82b25cdecbb\") "
Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.082846 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "9e7172d7-ebe7-4a16-878f-a82b25cdecbb" (UID: "9e7172d7-ebe7-4a16-878f-a82b25cdecbb"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.083117 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "9e7172d7-ebe7-4a16-878f-a82b25cdecbb" (UID: "9e7172d7-ebe7-4a16-878f-a82b25cdecbb"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.083128 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-kube-api-access-4nhwm" (OuterVolumeSpecName: "kube-api-access-4nhwm") pod "9e7172d7-ebe7-4a16-878f-a82b25cdecbb" (UID: "9e7172d7-ebe7-4a16-878f-a82b25cdecbb"). InnerVolumeSpecName "kube-api-access-4nhwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.121772 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-0" event={"ID":"692848fd-4cf4-401e-819f-c14ac900efea","Type":"ContainerDied","Data":"68ec0cfd3704172a795730f303e277df9aa2eff64f2a4d8120960d24b6aca170"} Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.121808 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-0" Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.121826 4810 scope.go:117] "RemoveContainer" containerID="bf7e22e268f1c6b6474a96ff020a3cf3ca568fcb6c44c8c82772fc80668447ff" Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.124653 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" event={"ID":"9e7172d7-ebe7-4a16-878f-a82b25cdecbb","Type":"ContainerDied","Data":"e9b0af997325e095e38d52f4fb6463a209d92bd503cdf6ec92550a777ff15e03"} Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.124717 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg" Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.142452 4810 scope.go:117] "RemoveContainer" containerID="9397b9766e68dbdccc09827f2dde91e3737ff25e5ffef0796c0e6c0cda613ff3" Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.156571 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.170627 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/openstack-galera-0"] Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.180824 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nhwm\" (UniqueName: \"kubernetes.io/projected/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-kube-api-access-4nhwm\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.180855 4810 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.180865 4810 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9e7172d7-ebe7-4a16-878f-a82b25cdecbb-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.182626 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg"] Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.184015 4810 scope.go:117] "RemoveContainer" containerID="9189bec18c282ea248daa8eb931becd8a29f3c4d69d54f70c935c8e586a42233" Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.188682 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-8bf56dbd-zkzlg"] Jan 10 
07:12:53 crc kubenswrapper[4810]: E0110 07:12:53.544943 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb is running failed: container process not found" containerID="aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 10 07:12:53 crc kubenswrapper[4810]: E0110 07:12:53.545780 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb is running failed: container process not found" containerID="aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 10 07:12:53 crc kubenswrapper[4810]: E0110 07:12:53.546312 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb is running failed: container process not found" containerID="aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 10 07:12:53 crc kubenswrapper[4810]: E0110 07:12:53.546371 4810 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb is running failed: container process not found" probeType="Readiness" pod="swift-kuttl-tests/openstack-galera-1" podUID="06676b15-3e4e-4fa8-bfe6-3d95ad522c31" containerName="galera" Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.701379 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="1ab9e5c2-daee-4d73-b55e-df866fba1922" path="/var/lib/kubelet/pods/1ab9e5c2-daee-4d73-b55e-df866fba1922/volumes" Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.702773 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="692848fd-4cf4-401e-819f-c14ac900efea" path="/var/lib/kubelet/pods/692848fd-4cf4-401e-819f-c14ac900efea/volumes" Jan 10 07:12:53 crc kubenswrapper[4810]: I0110 07:12:53.704065 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e7172d7-ebe7-4a16-878f-a82b25cdecbb" path="/var/lib/kubelet/pods/9e7172d7-ebe7-4a16-878f-a82b25cdecbb/volumes" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.234648 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.312685 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-kolla-config\") pod \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.313023 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-config-data-default\") pod \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.313106 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-operator-scripts\") pod \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.313221 4810 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-r5jk2\" (UniqueName: \"kubernetes.io/projected/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-kube-api-access-r5jk2\") pod \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.313314 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-config-data-generated\") pod \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.313408 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\" (UID: \"06676b15-3e4e-4fa8-bfe6-3d95ad522c31\") " Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.313806 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "06676b15-3e4e-4fa8-bfe6-3d95ad522c31" (UID: "06676b15-3e4e-4fa8-bfe6-3d95ad522c31"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.313871 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "06676b15-3e4e-4fa8-bfe6-3d95ad522c31" (UID: "06676b15-3e4e-4fa8-bfe6-3d95ad522c31"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.313905 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "06676b15-3e4e-4fa8-bfe6-3d95ad522c31" (UID: "06676b15-3e4e-4fa8-bfe6-3d95ad522c31"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.313955 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06676b15-3e4e-4fa8-bfe6-3d95ad522c31" (UID: "06676b15-3e4e-4fa8-bfe6-3d95ad522c31"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.321850 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-kube-api-access-r5jk2" (OuterVolumeSpecName: "kube-api-access-r5jk2") pod "06676b15-3e4e-4fa8-bfe6-3d95ad522c31" (UID: "06676b15-3e4e-4fa8-bfe6-3d95ad522c31"). InnerVolumeSpecName "kube-api-access-r5jk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.331750 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "mysql-db") pod "06676b15-3e4e-4fa8-bfe6-3d95ad522c31" (UID: "06676b15-3e4e-4fa8-bfe6-3d95ad522c31"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.393377 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq"] Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.393641 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" podUID="1067de2e-5591-420e-a068-a3279fb17d44" containerName="manager" containerID="cri-o://1d94ec8a64b2454754fe232550dc25e5ca0da17af4ef46b3fcc62d10e5d9ff64" gracePeriod=10 Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.415306 4810 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.415340 4810 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.415366 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5jk2\" (UniqueName: \"kubernetes.io/projected/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-kube-api-access-r5jk2\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.415377 4810 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.415409 4810 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on 
node \"crc\" " Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.415419 4810 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/06676b15-3e4e-4fa8-bfe6-3d95ad522c31-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.427605 4810 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.516441 4810 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.656615 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-tp9xq"] Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.656803 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-tp9xq" podUID="31586912-6862-4b77-892d-76af90a5b22a" containerName="registry-server" containerID="cri-o://1caa7fe14f267fe3e7e193bf0699f776e819a6b0cb2b5beba411b7108d6ac4a4" gracePeriod=30 Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.680705 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv"] Jan 10 07:12:54 crc kubenswrapper[4810]: I0110 07:12:54.685152 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/11e5c261382d818846f7e96019ed262d8469b8add844de0b465d0f0c78b9mgv"] Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.142222 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-tp9xq" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.161301 4810 generic.go:334] "Generic (PLEG): container finished" podID="1067de2e-5591-420e-a068-a3279fb17d44" containerID="1d94ec8a64b2454754fe232550dc25e5ca0da17af4ef46b3fcc62d10e5d9ff64" exitCode=0 Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.161345 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" event={"ID":"1067de2e-5591-420e-a068-a3279fb17d44","Type":"ContainerDied","Data":"1d94ec8a64b2454754fe232550dc25e5ca0da17af4ef46b3fcc62d10e5d9ff64"} Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.163307 4810 generic.go:334] "Generic (PLEG): container finished" podID="31586912-6862-4b77-892d-76af90a5b22a" containerID="1caa7fe14f267fe3e7e193bf0699f776e819a6b0cb2b5beba411b7108d6ac4a4" exitCode=0 Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.163370 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-tp9xq" event={"ID":"31586912-6862-4b77-892d-76af90a5b22a","Type":"ContainerDied","Data":"1caa7fe14f267fe3e7e193bf0699f776e819a6b0cb2b5beba411b7108d6ac4a4"} Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.163375 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-index-tp9xq" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.163393 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-tp9xq" event={"ID":"31586912-6862-4b77-892d-76af90a5b22a","Type":"ContainerDied","Data":"81c17ca9f21361c23dc8706fd494031df33b3e84cb6ae036990849e9151a89e7"} Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.163414 4810 scope.go:117] "RemoveContainer" containerID="1caa7fe14f267fe3e7e193bf0699f776e819a6b0cb2b5beba411b7108d6ac4a4" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.165559 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="swift-kuttl-tests/openstack-galera-1" event={"ID":"06676b15-3e4e-4fa8-bfe6-3d95ad522c31","Type":"ContainerDied","Data":"4f2cb68a314cac2fa28de279b279a5fe03603771c4f457953e9b214626e040d6"} Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.165645 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="swift-kuttl-tests/openstack-galera-1" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.178739 4810 scope.go:117] "RemoveContainer" containerID="1caa7fe14f267fe3e7e193bf0699f776e819a6b0cb2b5beba411b7108d6ac4a4" Jan 10 07:12:55 crc kubenswrapper[4810]: E0110 07:12:55.179796 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1caa7fe14f267fe3e7e193bf0699f776e819a6b0cb2b5beba411b7108d6ac4a4\": container with ID starting with 1caa7fe14f267fe3e7e193bf0699f776e819a6b0cb2b5beba411b7108d6ac4a4 not found: ID does not exist" containerID="1caa7fe14f267fe3e7e193bf0699f776e819a6b0cb2b5beba411b7108d6ac4a4" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.179824 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1caa7fe14f267fe3e7e193bf0699f776e819a6b0cb2b5beba411b7108d6ac4a4"} err="failed to get container status \"1caa7fe14f267fe3e7e193bf0699f776e819a6b0cb2b5beba411b7108d6ac4a4\": rpc error: code = NotFound desc = could not find container \"1caa7fe14f267fe3e7e193bf0699f776e819a6b0cb2b5beba411b7108d6ac4a4\": container with ID starting with 1caa7fe14f267fe3e7e193bf0699f776e819a6b0cb2b5beba411b7108d6ac4a4 not found: ID does not exist" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.179848 4810 scope.go:117] "RemoveContainer" containerID="aa43bfdbc51b68895bc7f0b7111d94d9eff2d0c8be31378f8fab4d7e2fef3dbb" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.209313 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.214919 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["swift-kuttl-tests/openstack-galera-1"] Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.216592 4810 scope.go:117] "RemoveContainer" containerID="792b3672925092a2559afb24530b01d1b2ceb48f24a35df7ea59a6d96ddbc8ae" Jan 10 
07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.225704 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2stw2\" (UniqueName: \"kubernetes.io/projected/31586912-6862-4b77-892d-76af90a5b22a-kube-api-access-2stw2\") pod \"31586912-6862-4b77-892d-76af90a5b22a\" (UID: \"31586912-6862-4b77-892d-76af90a5b22a\") " Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.230307 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31586912-6862-4b77-892d-76af90a5b22a-kube-api-access-2stw2" (OuterVolumeSpecName: "kube-api-access-2stw2") pod "31586912-6862-4b77-892d-76af90a5b22a" (UID: "31586912-6862-4b77-892d-76af90a5b22a"). InnerVolumeSpecName "kube-api-access-2stw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.327947 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2stw2\" (UniqueName: \"kubernetes.io/projected/31586912-6862-4b77-892d-76af90a5b22a-kube-api-access-2stw2\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.483059 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.497591 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-tp9xq"] Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.502551 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-index-tp9xq"] Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.631774 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1067de2e-5591-420e-a068-a3279fb17d44-apiservice-cert\") pod \"1067de2e-5591-420e-a068-a3279fb17d44\" (UID: \"1067de2e-5591-420e-a068-a3279fb17d44\") " Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.631904 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f76bd\" (UniqueName: \"kubernetes.io/projected/1067de2e-5591-420e-a068-a3279fb17d44-kube-api-access-f76bd\") pod \"1067de2e-5591-420e-a068-a3279fb17d44\" (UID: \"1067de2e-5591-420e-a068-a3279fb17d44\") " Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.631943 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1067de2e-5591-420e-a068-a3279fb17d44-webhook-cert\") pod \"1067de2e-5591-420e-a068-a3279fb17d44\" (UID: \"1067de2e-5591-420e-a068-a3279fb17d44\") " Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.635389 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1067de2e-5591-420e-a068-a3279fb17d44-kube-api-access-f76bd" (OuterVolumeSpecName: "kube-api-access-f76bd") pod "1067de2e-5591-420e-a068-a3279fb17d44" (UID: "1067de2e-5591-420e-a068-a3279fb17d44"). InnerVolumeSpecName "kube-api-access-f76bd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.635442 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1067de2e-5591-420e-a068-a3279fb17d44-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "1067de2e-5591-420e-a068-a3279fb17d44" (UID: "1067de2e-5591-420e-a068-a3279fb17d44"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.636731 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1067de2e-5591-420e-a068-a3279fb17d44-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "1067de2e-5591-420e-a068-a3279fb17d44" (UID: "1067de2e-5591-420e-a068-a3279fb17d44"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.699229 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06676b15-3e4e-4fa8-bfe6-3d95ad522c31" path="/var/lib/kubelet/pods/06676b15-3e4e-4fa8-bfe6-3d95ad522c31/volumes" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.699822 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31586912-6862-4b77-892d-76af90a5b22a" path="/var/lib/kubelet/pods/31586912-6862-4b77-892d-76af90a5b22a/volumes" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.700393 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6db2faa-2559-40fa-be95-29fb8673afa5" path="/var/lib/kubelet/pods/e6db2faa-2559-40fa-be95-29fb8673afa5/volumes" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.733618 4810 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1067de2e-5591-420e-a068-a3279fb17d44-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 
07:12:55.733850 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f76bd\" (UniqueName: \"kubernetes.io/projected/1067de2e-5591-420e-a068-a3279fb17d44-kube-api-access-f76bd\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:55 crc kubenswrapper[4810]: I0110 07:12:55.733929 4810 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1067de2e-5591-420e-a068-a3279fb17d44-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 10 07:12:56 crc kubenswrapper[4810]: I0110 07:12:56.174334 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" event={"ID":"1067de2e-5591-420e-a068-a3279fb17d44","Type":"ContainerDied","Data":"bb99ff9b3dbc10a84952f9f63279c94eedb02f677b5de3618585287576b29c00"} Jan 10 07:12:56 crc kubenswrapper[4810]: I0110 07:12:56.174434 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq" Jan 10 07:12:56 crc kubenswrapper[4810]: I0110 07:12:56.174404 4810 scope.go:117] "RemoveContainer" containerID="1d94ec8a64b2454754fe232550dc25e5ca0da17af4ef46b3fcc62d10e5d9ff64" Jan 10 07:12:56 crc kubenswrapper[4810]: I0110 07:12:56.200767 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq"] Jan 10 07:12:56 crc kubenswrapper[4810]: I0110 07:12:56.215122 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-765cdbf886-wmmsq"] Jan 10 07:12:57 crc kubenswrapper[4810]: I0110 07:12:57.708768 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1067de2e-5591-420e-a068-a3279fb17d44" path="/var/lib/kubelet/pods/1067de2e-5591-420e-a068-a3279fb17d44/volumes" Jan 10 07:12:59 crc kubenswrapper[4810]: I0110 07:12:59.715828 4810 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd"] Jan 10 07:12:59 crc kubenswrapper[4810]: I0110 07:12:59.716424 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd" podUID="5f6d9ded-28c8-45af-a726-a1165a822d3e" containerName="operator" containerID="cri-o://cda67367bc1ec5c505976945c28294226968305fa1eae74b7f512c0345895c06" gracePeriod=10 Jan 10 07:13:00 crc kubenswrapper[4810]: I0110 07:13:00.000589 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-4ph29"] Jan 10 07:13:00 crc kubenswrapper[4810]: I0110 07:13:00.000892 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" podUID="c0ce8179-abd0-4cbe-8bf9-9aea18680a9c" containerName="registry-server" containerID="cri-o://080dbd69fa1a9a36ff9676f9feb2d859768c53c7fb21b0e88b1283a4d7e63ef8" gracePeriod=30 Jan 10 07:13:00 crc kubenswrapper[4810]: I0110 07:13:00.033947 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"] Jan 10 07:13:00 crc kubenswrapper[4810]: I0110 07:13:00.046447 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590gcbfv"] Jan 10 07:13:01 crc kubenswrapper[4810]: I0110 07:13:01.218526 4810 generic.go:334] "Generic (PLEG): container finished" podID="c0ce8179-abd0-4cbe-8bf9-9aea18680a9c" containerID="080dbd69fa1a9a36ff9676f9feb2d859768c53c7fb21b0e88b1283a4d7e63ef8" exitCode=0 Jan 10 07:13:01 crc kubenswrapper[4810]: I0110 07:13:01.218573 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" 
event={"ID":"c0ce8179-abd0-4cbe-8bf9-9aea18680a9c","Type":"ContainerDied","Data":"080dbd69fa1a9a36ff9676f9feb2d859768c53c7fb21b0e88b1283a4d7e63ef8"} Jan 10 07:13:01 crc kubenswrapper[4810]: I0110 07:13:01.221975 4810 generic.go:334] "Generic (PLEG): container finished" podID="5f6d9ded-28c8-45af-a726-a1165a822d3e" containerID="cda67367bc1ec5c505976945c28294226968305fa1eae74b7f512c0345895c06" exitCode=0 Jan 10 07:13:01 crc kubenswrapper[4810]: I0110 07:13:01.222040 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd" event={"ID":"5f6d9ded-28c8-45af-a726-a1165a822d3e","Type":"ContainerDied","Data":"cda67367bc1ec5c505976945c28294226968305fa1eae74b7f512c0345895c06"} Jan 10 07:13:01 crc kubenswrapper[4810]: I0110 07:13:01.570753 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd" Jan 10 07:13:01 crc kubenswrapper[4810]: E0110 07:13:01.592567 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 080dbd69fa1a9a36ff9676f9feb2d859768c53c7fb21b0e88b1283a4d7e63ef8 is running failed: container process not found" containerID="080dbd69fa1a9a36ff9676f9feb2d859768c53c7fb21b0e88b1283a4d7e63ef8" cmd=["grpc_health_probe","-addr=:50051"] Jan 10 07:13:01 crc kubenswrapper[4810]: E0110 07:13:01.593093 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 080dbd69fa1a9a36ff9676f9feb2d859768c53c7fb21b0e88b1283a4d7e63ef8 is running failed: container process not found" containerID="080dbd69fa1a9a36ff9676f9feb2d859768c53c7fb21b0e88b1283a4d7e63ef8" cmd=["grpc_health_probe","-addr=:50051"] Jan 10 07:13:01 crc kubenswrapper[4810]: E0110 07:13:01.593596 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of 080dbd69fa1a9a36ff9676f9feb2d859768c53c7fb21b0e88b1283a4d7e63ef8 is running failed: container process not found" containerID="080dbd69fa1a9a36ff9676f9feb2d859768c53c7fb21b0e88b1283a4d7e63ef8" cmd=["grpc_health_probe","-addr=:50051"] Jan 10 07:13:01 crc kubenswrapper[4810]: E0110 07:13:01.593630 4810 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 080dbd69fa1a9a36ff9676f9feb2d859768c53c7fb21b0e88b1283a4d7e63ef8 is running failed: container process not found" probeType="Readiness" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" podUID="c0ce8179-abd0-4cbe-8bf9-9aea18680a9c" containerName="registry-server" Jan 10 07:13:01 crc kubenswrapper[4810]: I0110 07:13:01.633546 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" Jan 10 07:13:01 crc kubenswrapper[4810]: I0110 07:13:01.662653 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrcxz\" (UniqueName: \"kubernetes.io/projected/5f6d9ded-28c8-45af-a726-a1165a822d3e-kube-api-access-wrcxz\") pod \"5f6d9ded-28c8-45af-a726-a1165a822d3e\" (UID: \"5f6d9ded-28c8-45af-a726-a1165a822d3e\") " Jan 10 07:13:01 crc kubenswrapper[4810]: I0110 07:13:01.668496 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f6d9ded-28c8-45af-a726-a1165a822d3e-kube-api-access-wrcxz" (OuterVolumeSpecName: "kube-api-access-wrcxz") pod "5f6d9ded-28c8-45af-a726-a1165a822d3e" (UID: "5f6d9ded-28c8-45af-a726-a1165a822d3e"). InnerVolumeSpecName "kube-api-access-wrcxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:13:01 crc kubenswrapper[4810]: I0110 07:13:01.699704 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc" path="/var/lib/kubelet/pods/5d5f0dbc-99ea-4ed7-b7c2-235a84ed86fc/volumes" Jan 10 07:13:01 crc kubenswrapper[4810]: I0110 07:13:01.763994 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmrzh\" (UniqueName: \"kubernetes.io/projected/c0ce8179-abd0-4cbe-8bf9-9aea18680a9c-kube-api-access-nmrzh\") pod \"c0ce8179-abd0-4cbe-8bf9-9aea18680a9c\" (UID: \"c0ce8179-abd0-4cbe-8bf9-9aea18680a9c\") " Jan 10 07:13:01 crc kubenswrapper[4810]: I0110 07:13:01.764293 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrcxz\" (UniqueName: \"kubernetes.io/projected/5f6d9ded-28c8-45af-a726-a1165a822d3e-kube-api-access-wrcxz\") on node \"crc\" DevicePath \"\"" Jan 10 07:13:01 crc kubenswrapper[4810]: I0110 07:13:01.767224 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0ce8179-abd0-4cbe-8bf9-9aea18680a9c-kube-api-access-nmrzh" (OuterVolumeSpecName: "kube-api-access-nmrzh") pod "c0ce8179-abd0-4cbe-8bf9-9aea18680a9c" (UID: "c0ce8179-abd0-4cbe-8bf9-9aea18680a9c"). InnerVolumeSpecName "kube-api-access-nmrzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:13:01 crc kubenswrapper[4810]: I0110 07:13:01.865567 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmrzh\" (UniqueName: \"kubernetes.io/projected/c0ce8179-abd0-4cbe-8bf9-9aea18680a9c-kube-api-access-nmrzh\") on node \"crc\" DevicePath \"\"" Jan 10 07:13:02 crc kubenswrapper[4810]: I0110 07:13:02.228171 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd" event={"ID":"5f6d9ded-28c8-45af-a726-a1165a822d3e","Type":"ContainerDied","Data":"f0a23bcf314611c4d05403b720e9314386cbaa056bf940a2f7979424402ecfa1"} Jan 10 07:13:02 crc kubenswrapper[4810]: I0110 07:13:02.228237 4810 scope.go:117] "RemoveContainer" containerID="cda67367bc1ec5c505976945c28294226968305fa1eae74b7f512c0345895c06" Jan 10 07:13:02 crc kubenswrapper[4810]: I0110 07:13:02.228356 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd" Jan 10 07:13:02 crc kubenswrapper[4810]: I0110 07:13:02.232846 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" event={"ID":"c0ce8179-abd0-4cbe-8bf9-9aea18680a9c","Type":"ContainerDied","Data":"df8ce4ce014f354d5814cd1ee405f66c9d631bcf698dfd4c1b79ebf161fa840b"} Jan 10 07:13:02 crc kubenswrapper[4810]: I0110 07:13:02.232925 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-4ph29" Jan 10 07:13:02 crc kubenswrapper[4810]: I0110 07:13:02.259966 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd"] Jan 10 07:13:02 crc kubenswrapper[4810]: I0110 07:13:02.262880 4810 scope.go:117] "RemoveContainer" containerID="080dbd69fa1a9a36ff9676f9feb2d859768c53c7fb21b0e88b1283a4d7e63ef8" Jan 10 07:13:02 crc kubenswrapper[4810]: I0110 07:13:02.274813 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-kz7fd"] Jan 10 07:13:02 crc kubenswrapper[4810]: I0110 07:13:02.279821 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-4ph29"] Jan 10 07:13:02 crc kubenswrapper[4810]: I0110 07:13:02.283217 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-4ph29"] Jan 10 07:13:03 crc kubenswrapper[4810]: I0110 07:13:03.700399 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f6d9ded-28c8-45af-a726-a1165a822d3e" path="/var/lib/kubelet/pods/5f6d9ded-28c8-45af-a726-a1165a822d3e/volumes" Jan 10 07:13:03 crc kubenswrapper[4810]: I0110 07:13:03.702414 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0ce8179-abd0-4cbe-8bf9-9aea18680a9c" path="/var/lib/kubelet/pods/c0ce8179-abd0-4cbe-8bf9-9aea18680a9c/volumes" Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.255652 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9"] Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.255878 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" podUID="da111c6b-f079-4b6f-8bab-421f512d92f3" containerName="manager" 
containerID="cri-o://46e32b9a8d41197302d015b992c644f1497840d286b434fbd9bc63f430ca38f0" gracePeriod=10 Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.674283 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.821730 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da111c6b-f079-4b6f-8bab-421f512d92f3-webhook-cert\") pod \"da111c6b-f079-4b6f-8bab-421f512d92f3\" (UID: \"da111c6b-f079-4b6f-8bab-421f512d92f3\") " Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.821786 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lw7j\" (UniqueName: \"kubernetes.io/projected/da111c6b-f079-4b6f-8bab-421f512d92f3-kube-api-access-9lw7j\") pod \"da111c6b-f079-4b6f-8bab-421f512d92f3\" (UID: \"da111c6b-f079-4b6f-8bab-421f512d92f3\") " Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.821830 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da111c6b-f079-4b6f-8bab-421f512d92f3-apiservice-cert\") pod \"da111c6b-f079-4b6f-8bab-421f512d92f3\" (UID: \"da111c6b-f079-4b6f-8bab-421f512d92f3\") " Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.826689 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da111c6b-f079-4b6f-8bab-421f512d92f3-kube-api-access-9lw7j" (OuterVolumeSpecName: "kube-api-access-9lw7j") pod "da111c6b-f079-4b6f-8bab-421f512d92f3" (UID: "da111c6b-f079-4b6f-8bab-421f512d92f3"). InnerVolumeSpecName "kube-api-access-9lw7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.827669 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da111c6b-f079-4b6f-8bab-421f512d92f3-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "da111c6b-f079-4b6f-8bab-421f512d92f3" (UID: "da111c6b-f079-4b6f-8bab-421f512d92f3"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.829320 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da111c6b-f079-4b6f-8bab-421f512d92f3-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "da111c6b-f079-4b6f-8bab-421f512d92f3" (UID: "da111c6b-f079-4b6f-8bab-421f512d92f3"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.923715 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lw7j\" (UniqueName: \"kubernetes.io/projected/da111c6b-f079-4b6f-8bab-421f512d92f3-kube-api-access-9lw7j\") on node \"crc\" DevicePath \"\"" Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.923744 4810 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/da111c6b-f079-4b6f-8bab-421f512d92f3-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.923755 4810 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da111c6b-f079-4b6f-8bab-421f512d92f3-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.986293 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-r22zc"] Jan 10 07:13:05 crc kubenswrapper[4810]: I0110 07:13:05.986523 4810 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-r22zc" podUID="f543013c-a720-406d-be72-86f2fd11d8a7" containerName="registry-server" containerID="cri-o://67f0d1bc649bc133c499155a64f1267f400b0c4daf68415ce40454a0d47de2a7" gracePeriod=30 Jan 10 07:13:06 crc kubenswrapper[4810]: I0110 07:13:06.025156 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8"] Jan 10 07:13:06 crc kubenswrapper[4810]: I0110 07:13:06.032293 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/1004c850a4f454c977fae3f045e87028eceebe49151bdca557a558b3b7dgbt8"] Jan 10 07:13:06 crc kubenswrapper[4810]: I0110 07:13:06.262292 4810 generic.go:334] "Generic (PLEG): container finished" podID="da111c6b-f079-4b6f-8bab-421f512d92f3" containerID="46e32b9a8d41197302d015b992c644f1497840d286b434fbd9bc63f430ca38f0" exitCode=0 Jan 10 07:13:06 crc kubenswrapper[4810]: I0110 07:13:06.262331 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" event={"ID":"da111c6b-f079-4b6f-8bab-421f512d92f3","Type":"ContainerDied","Data":"46e32b9a8d41197302d015b992c644f1497840d286b434fbd9bc63f430ca38f0"} Jan 10 07:13:06 crc kubenswrapper[4810]: I0110 07:13:06.262360 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" event={"ID":"da111c6b-f079-4b6f-8bab-421f512d92f3","Type":"ContainerDied","Data":"4140bed8829354887ce3502a5f95228c28362f6b9ada942bfe7fc1dcf4ecf013"} Jan 10 07:13:06 crc kubenswrapper[4810]: I0110 07:13:06.262377 4810 scope.go:117] "RemoveContainer" containerID="46e32b9a8d41197302d015b992c644f1497840d286b434fbd9bc63f430ca38f0" Jan 10 07:13:06 crc kubenswrapper[4810]: I0110 07:13:06.262464 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9" Jan 10 07:13:06 crc kubenswrapper[4810]: I0110 07:13:06.335749 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9"] Jan 10 07:13:06 crc kubenswrapper[4810]: I0110 07:13:06.339937 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-controller-manager-68f4fb9846-lvtr9"] Jan 10 07:13:06 crc kubenswrapper[4810]: I0110 07:13:06.340526 4810 scope.go:117] "RemoveContainer" containerID="46e32b9a8d41197302d015b992c644f1497840d286b434fbd9bc63f430ca38f0" Jan 10 07:13:06 crc kubenswrapper[4810]: E0110 07:13:06.341649 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46e32b9a8d41197302d015b992c644f1497840d286b434fbd9bc63f430ca38f0\": container with ID starting with 46e32b9a8d41197302d015b992c644f1497840d286b434fbd9bc63f430ca38f0 not found: ID does not exist" containerID="46e32b9a8d41197302d015b992c644f1497840d286b434fbd9bc63f430ca38f0" Jan 10 07:13:06 crc kubenswrapper[4810]: I0110 07:13:06.341683 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e32b9a8d41197302d015b992c644f1497840d286b434fbd9bc63f430ca38f0"} err="failed to get container status \"46e32b9a8d41197302d015b992c644f1497840d286b434fbd9bc63f430ca38f0\": rpc error: code = NotFound desc = could not find container \"46e32b9a8d41197302d015b992c644f1497840d286b434fbd9bc63f430ca38f0\": container with ID starting with 46e32b9a8d41197302d015b992c644f1497840d286b434fbd9bc63f430ca38f0 not found: ID does not exist" Jan 10 07:13:06 crc kubenswrapper[4810]: I0110 07:13:06.802286 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"] Jan 10 07:13:06 crc kubenswrapper[4810]: I0110 07:13:06.802482 4810 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz" podUID="c2cb1aea-83f9-4f30-8b56-f6726a9c8f81" containerName="manager" containerID="cri-o://a1c0bc8361d3b8247aa4cf1ad2df8ec6b6ffca072bed0d159642580cc3f78648" gracePeriod=10 Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.081606 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-2zn9d"] Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.083264 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-2zn9d" podUID="a2647002-dfbd-4b5d-9732-f0cd8daa21ca" containerName="registry-server" containerID="cri-o://c14c78ab0f678fa97eb46b6c13a0dbab374a424c9ec51511777bb51b9b048826" gracePeriod=30 Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.109569 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"] Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.111302 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/1db583f94d1987eb997cb67ef7db92b9cbf8f229674982c4f1534d76a96xdvn"] Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.262176 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-r22zc" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.282220 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.289755 4810 generic.go:334] "Generic (PLEG): container finished" podID="c2cb1aea-83f9-4f30-8b56-f6726a9c8f81" containerID="a1c0bc8361d3b8247aa4cf1ad2df8ec6b6ffca072bed0d159642580cc3f78648" exitCode=0 Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.289824 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz" event={"ID":"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81","Type":"ContainerDied","Data":"a1c0bc8361d3b8247aa4cf1ad2df8ec6b6ffca072bed0d159642580cc3f78648"} Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.289853 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.289858 4810 scope.go:117] "RemoveContainer" containerID="a1c0bc8361d3b8247aa4cf1ad2df8ec6b6ffca072bed0d159642580cc3f78648" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.294316 4810 generic.go:334] "Generic (PLEG): container finished" podID="f543013c-a720-406d-be72-86f2fd11d8a7" containerID="67f0d1bc649bc133c499155a64f1267f400b0c4daf68415ce40454a0d47de2a7" exitCode=0 Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.294352 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-r22zc" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.294473 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-r22zc" event={"ID":"f543013c-a720-406d-be72-86f2fd11d8a7","Type":"ContainerDied","Data":"67f0d1bc649bc133c499155a64f1267f400b0c4daf68415ce40454a0d47de2a7"} Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.294520 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-r22zc" event={"ID":"f543013c-a720-406d-be72-86f2fd11d8a7","Type":"ContainerDied","Data":"7f25147094ceaa7de8df9089d52796255a589c5e1b0bc28253cc1f69a0119041"} Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.295702 4810 generic.go:334] "Generic (PLEG): container finished" podID="a2647002-dfbd-4b5d-9732-f0cd8daa21ca" containerID="c14c78ab0f678fa97eb46b6c13a0dbab374a424c9ec51511777bb51b9b048826" exitCode=0 Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.295740 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-2zn9d" event={"ID":"a2647002-dfbd-4b5d-9732-f0cd8daa21ca","Type":"ContainerDied","Data":"c14c78ab0f678fa97eb46b6c13a0dbab374a424c9ec51511777bb51b9b048826"} Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.312634 4810 scope.go:117] "RemoveContainer" containerID="67f0d1bc649bc133c499155a64f1267f400b0c4daf68415ce40454a0d47de2a7" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.329063 4810 scope.go:117] "RemoveContainer" containerID="67f0d1bc649bc133c499155a64f1267f400b0c4daf68415ce40454a0d47de2a7" Jan 10 07:13:07 crc kubenswrapper[4810]: E0110 07:13:07.329486 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67f0d1bc649bc133c499155a64f1267f400b0c4daf68415ce40454a0d47de2a7\": container with ID starting with 
67f0d1bc649bc133c499155a64f1267f400b0c4daf68415ce40454a0d47de2a7 not found: ID does not exist" containerID="67f0d1bc649bc133c499155a64f1267f400b0c4daf68415ce40454a0d47de2a7" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.329512 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67f0d1bc649bc133c499155a64f1267f400b0c4daf68415ce40454a0d47de2a7"} err="failed to get container status \"67f0d1bc649bc133c499155a64f1267f400b0c4daf68415ce40454a0d47de2a7\": rpc error: code = NotFound desc = could not find container \"67f0d1bc649bc133c499155a64f1267f400b0c4daf68415ce40454a0d47de2a7\": container with ID starting with 67f0d1bc649bc133c499155a64f1267f400b0c4daf68415ce40454a0d47de2a7 not found: ID does not exist" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.451349 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg87j\" (UniqueName: \"kubernetes.io/projected/f543013c-a720-406d-be72-86f2fd11d8a7-kube-api-access-pg87j\") pod \"f543013c-a720-406d-be72-86f2fd11d8a7\" (UID: \"f543013c-a720-406d-be72-86f2fd11d8a7\") " Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.451422 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-apiservice-cert\") pod \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\" (UID: \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\") " Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.451517 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-webhook-cert\") pod \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\" (UID: \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\") " Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.451560 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-9pdjc\" (UniqueName: \"kubernetes.io/projected/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-kube-api-access-9pdjc\") pod \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\" (UID: \"c2cb1aea-83f9-4f30-8b56-f6726a9c8f81\") " Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.457918 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-kube-api-access-9pdjc" (OuterVolumeSpecName: "kube-api-access-9pdjc") pod "c2cb1aea-83f9-4f30-8b56-f6726a9c8f81" (UID: "c2cb1aea-83f9-4f30-8b56-f6726a9c8f81"). InnerVolumeSpecName "kube-api-access-9pdjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.458214 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "c2cb1aea-83f9-4f30-8b56-f6726a9c8f81" (UID: "c2cb1aea-83f9-4f30-8b56-f6726a9c8f81"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.459846 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "c2cb1aea-83f9-4f30-8b56-f6726a9c8f81" (UID: "c2cb1aea-83f9-4f30-8b56-f6726a9c8f81"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.460105 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f543013c-a720-406d-be72-86f2fd11d8a7-kube-api-access-pg87j" (OuterVolumeSpecName: "kube-api-access-pg87j") pod "f543013c-a720-406d-be72-86f2fd11d8a7" (UID: "f543013c-a720-406d-be72-86f2fd11d8a7"). InnerVolumeSpecName "kube-api-access-pg87j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.509568 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-2zn9d" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.553051 4810 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.553110 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pdjc\" (UniqueName: \"kubernetes.io/projected/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-kube-api-access-9pdjc\") on node \"crc\" DevicePath \"\"" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.553119 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg87j\" (UniqueName: \"kubernetes.io/projected/f543013c-a720-406d-be72-86f2fd11d8a7-kube-api-access-pg87j\") on node \"crc\" DevicePath \"\"" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.553131 4810 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.619679 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"] Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.632816 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6946567f8-tf7nz"] Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.637107 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-r22zc"] Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.640364 4810 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-r22zc"] Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.654448 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk7l7\" (UniqueName: \"kubernetes.io/projected/a2647002-dfbd-4b5d-9732-f0cd8daa21ca-kube-api-access-pk7l7\") pod \"a2647002-dfbd-4b5d-9732-f0cd8daa21ca\" (UID: \"a2647002-dfbd-4b5d-9732-f0cd8daa21ca\") " Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.657988 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2647002-dfbd-4b5d-9732-f0cd8daa21ca-kube-api-access-pk7l7" (OuterVolumeSpecName: "kube-api-access-pk7l7") pod "a2647002-dfbd-4b5d-9732-f0cd8daa21ca" (UID: "a2647002-dfbd-4b5d-9732-f0cd8daa21ca"). InnerVolumeSpecName "kube-api-access-pk7l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.699538 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e522d77-9796-44d0-9c0c-6b92d799d4e8" path="/var/lib/kubelet/pods/0e522d77-9796-44d0-9c0c-6b92d799d4e8/volumes" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.700232 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a08fc417-fb93-4e50-a805-d13909763e26" path="/var/lib/kubelet/pods/a08fc417-fb93-4e50-a805-d13909763e26/volumes" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.700830 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2cb1aea-83f9-4f30-8b56-f6726a9c8f81" path="/var/lib/kubelet/pods/c2cb1aea-83f9-4f30-8b56-f6726a9c8f81/volumes" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.701945 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da111c6b-f079-4b6f-8bab-421f512d92f3" path="/var/lib/kubelet/pods/da111c6b-f079-4b6f-8bab-421f512d92f3/volumes" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.702490 4810 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f543013c-a720-406d-be72-86f2fd11d8a7" path="/var/lib/kubelet/pods/f543013c-a720-406d-be72-86f2fd11d8a7/volumes" Jan 10 07:13:07 crc kubenswrapper[4810]: I0110 07:13:07.755752 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk7l7\" (UniqueName: \"kubernetes.io/projected/a2647002-dfbd-4b5d-9732-f0cd8daa21ca-kube-api-access-pk7l7\") on node \"crc\" DevicePath \"\"" Jan 10 07:13:08 crc kubenswrapper[4810]: I0110 07:13:08.306166 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-2zn9d" event={"ID":"a2647002-dfbd-4b5d-9732-f0cd8daa21ca","Type":"ContainerDied","Data":"f24d738dcfccc887924fe7163f8f38067a045e2a3569e8f5218ab826fd16051c"} Jan 10 07:13:08 crc kubenswrapper[4810]: I0110 07:13:08.306955 4810 scope.go:117] "RemoveContainer" containerID="c14c78ab0f678fa97eb46b6c13a0dbab374a424c9ec51511777bb51b9b048826" Jan 10 07:13:08 crc kubenswrapper[4810]: I0110 07:13:08.306448 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-2zn9d"
Jan 10 07:13:08 crc kubenswrapper[4810]: I0110 07:13:08.324935 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-2zn9d"]
Jan 10 07:13:08 crc kubenswrapper[4810]: I0110 07:13:08.328251 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-2zn9d"]
Jan 10 07:13:09 crc kubenswrapper[4810]: I0110 07:13:09.699008 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2647002-dfbd-4b5d-9732-f0cd8daa21ca" path="/var/lib/kubelet/pods/a2647002-dfbd-4b5d-9732-f0cd8daa21ca/volumes"
Jan 10 07:13:20 crc kubenswrapper[4810]: I0110 07:13:20.883079 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 07:13:20 crc kubenswrapper[4810]: I0110 07:13:20.883577 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.318499 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x2ftf/must-gather-m5n5r"]
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319030 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0387ec-3523-41da-b323-7249eb242b4d" containerName="galera"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319046 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0387ec-3523-41da-b323-7249eb242b4d" containerName="galera"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319063 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f543013c-a720-406d-be72-86f2fd11d8a7" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319071 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="f543013c-a720-406d-be72-86f2fd11d8a7" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319085 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1067de2e-5591-420e-a068-a3279fb17d44" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319093 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1067de2e-5591-420e-a068-a3279fb17d44" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319107 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692848fd-4cf4-401e-819f-c14ac900efea" containerName="mysql-bootstrap"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319115 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="692848fd-4cf4-401e-819f-c14ac900efea" containerName="mysql-bootstrap"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319127 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e" containerName="mariadb-account-delete"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319135 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e" containerName="mariadb-account-delete"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319147 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06676b15-3e4e-4fa8-bfe6-3d95ad522c31" containerName="galera"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319155 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="06676b15-3e4e-4fa8-bfe6-3d95ad522c31" containerName="galera"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319164 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da111c6b-f079-4b6f-8bab-421f512d92f3" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319171 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="da111c6b-f079-4b6f-8bab-421f512d92f3" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319182 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2cb1aea-83f9-4f30-8b56-f6726a9c8f81" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319190 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2cb1aea-83f9-4f30-8b56-f6726a9c8f81" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319223 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2647002-dfbd-4b5d-9732-f0cd8daa21ca" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319232 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2647002-dfbd-4b5d-9732-f0cd8daa21ca" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319247 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31586912-6862-4b77-892d-76af90a5b22a" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319254 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="31586912-6862-4b77-892d-76af90a5b22a" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319265 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab3773c-d14d-45f3-8aab-fb5b3eb075ee" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319271 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab3773c-d14d-45f3-8aab-fb5b3eb075ee" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319283 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c176cb-a9d8-49ed-9d35-78a2975d9dd6" containerName="setup-container"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319293 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c176cb-a9d8-49ed-9d35-78a2975d9dd6" containerName="setup-container"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319301 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e7172d7-ebe7-4a16-878f-a82b25cdecbb" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319308 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e7172d7-ebe7-4a16-878f-a82b25cdecbb" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319319 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0ce8179-abd0-4cbe-8bf9-9aea18680a9c" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319327 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0ce8179-abd0-4cbe-8bf9-9aea18680a9c" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319338 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6d9ded-28c8-45af-a726-a1165a822d3e" containerName="operator"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319345 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6d9ded-28c8-45af-a726-a1165a822d3e" containerName="operator"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319355 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb0387ec-3523-41da-b323-7249eb242b4d" containerName="mysql-bootstrap"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319364 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb0387ec-3523-41da-b323-7249eb242b4d" containerName="mysql-bootstrap"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319377 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c176cb-a9d8-49ed-9d35-78a2975d9dd6" containerName="rabbitmq"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319385 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c176cb-a9d8-49ed-9d35-78a2975d9dd6" containerName="rabbitmq"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319395 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3bfd13-4157-4289-981b-55602232df01" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319403 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3bfd13-4157-4289-981b-55602232df01" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319413 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692848fd-4cf4-401e-819f-c14ac900efea" containerName="galera"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319420 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="692848fd-4cf4-401e-819f-c14ac900efea" containerName="galera"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319428 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ab9e5c2-daee-4d73-b55e-df866fba1922" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319437 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ab9e5c2-daee-4d73-b55e-df866fba1922" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319449 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06676b15-3e4e-4fa8-bfe6-3d95ad522c31" containerName="mysql-bootstrap"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319458 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="06676b15-3e4e-4fa8-bfe6-3d95ad522c31" containerName="mysql-bootstrap"
Jan 10 07:13:22 crc kubenswrapper[4810]: E0110 07:13:22.319467 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b41eafc-06e2-4d4c-8e1c-20220c76be9f" containerName="memcached"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319475 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b41eafc-06e2-4d4c-8e1c-20220c76be9f" containerName="memcached"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319602 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="da111c6b-f079-4b6f-8bab-421f512d92f3" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319616 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2647002-dfbd-4b5d-9732-f0cd8daa21ca" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319628 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b41eafc-06e2-4d4c-8e1c-20220c76be9f" containerName="memcached"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319637 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab3773c-d14d-45f3-8aab-fb5b3eb075ee" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319649 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2cb1aea-83f9-4f30-8b56-f6726a9c8f81" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319658 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3bfd13-4157-4289-981b-55602232df01" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319667 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="13c176cb-a9d8-49ed-9d35-78a2975d9dd6" containerName="rabbitmq"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319675 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0ce8179-abd0-4cbe-8bf9-9aea18680a9c" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319685 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="31586912-6862-4b77-892d-76af90a5b22a" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319696 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="f543013c-a720-406d-be72-86f2fd11d8a7" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319703 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ab9e5c2-daee-4d73-b55e-df866fba1922" containerName="registry-server"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319713 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="06676b15-3e4e-4fa8-bfe6-3d95ad522c31" containerName="galera"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319724 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e7172d7-ebe7-4a16-878f-a82b25cdecbb" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319736 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f6d9ded-28c8-45af-a726-a1165a822d3e" containerName="operator"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319748 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1067de2e-5591-420e-a068-a3279fb17d44" containerName="manager"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319757 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd4d0e9-a0b2-42bc-be65-9f3c891c9c3e" containerName="mariadb-account-delete"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319767 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="692848fd-4cf4-401e-819f-c14ac900efea" containerName="galera"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.319776 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb0387ec-3523-41da-b323-7249eb242b4d" containerName="galera"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.320481 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x2ftf/must-gather-m5n5r"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.323168 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x2ftf"/"kube-root-ca.crt"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.323416 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x2ftf"/"openshift-service-ca.crt"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.328210 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x2ftf/must-gather-m5n5r"]
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.482838 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtpfs\" (UniqueName: \"kubernetes.io/projected/1ece9af6-7d99-4335-adcc-28378b4222cb-kube-api-access-xtpfs\") pod \"must-gather-m5n5r\" (UID: \"1ece9af6-7d99-4335-adcc-28378b4222cb\") " pod="openshift-must-gather-x2ftf/must-gather-m5n5r"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.482902 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ece9af6-7d99-4335-adcc-28378b4222cb-must-gather-output\") pod \"must-gather-m5n5r\" (UID: \"1ece9af6-7d99-4335-adcc-28378b4222cb\") " pod="openshift-must-gather-x2ftf/must-gather-m5n5r"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.583598 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtpfs\" (UniqueName: \"kubernetes.io/projected/1ece9af6-7d99-4335-adcc-28378b4222cb-kube-api-access-xtpfs\") pod \"must-gather-m5n5r\" (UID: \"1ece9af6-7d99-4335-adcc-28378b4222cb\") " pod="openshift-must-gather-x2ftf/must-gather-m5n5r"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.583644 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ece9af6-7d99-4335-adcc-28378b4222cb-must-gather-output\") pod \"must-gather-m5n5r\" (UID: \"1ece9af6-7d99-4335-adcc-28378b4222cb\") " pod="openshift-must-gather-x2ftf/must-gather-m5n5r"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.584107 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ece9af6-7d99-4335-adcc-28378b4222cb-must-gather-output\") pod \"must-gather-m5n5r\" (UID: \"1ece9af6-7d99-4335-adcc-28378b4222cb\") " pod="openshift-must-gather-x2ftf/must-gather-m5n5r"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.610019 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtpfs\" (UniqueName: \"kubernetes.io/projected/1ece9af6-7d99-4335-adcc-28378b4222cb-kube-api-access-xtpfs\") pod \"must-gather-m5n5r\" (UID: \"1ece9af6-7d99-4335-adcc-28378b4222cb\") " pod="openshift-must-gather-x2ftf/must-gather-m5n5r"
Jan 10 07:13:22 crc kubenswrapper[4810]: I0110 07:13:22.641424 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x2ftf/must-gather-m5n5r"
Jan 10 07:13:23 crc kubenswrapper[4810]: I0110 07:13:23.052449 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x2ftf/must-gather-m5n5r"]
Jan 10 07:13:23 crc kubenswrapper[4810]: I0110 07:13:23.435566 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x2ftf/must-gather-m5n5r" event={"ID":"1ece9af6-7d99-4335-adcc-28378b4222cb","Type":"ContainerStarted","Data":"eec21274ebfd0456ceabeb9cbab6e1ea9007b965dc378bfb8a4481af411d298a"}
Jan 10 07:13:32 crc kubenswrapper[4810]: I0110 07:13:32.506841 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x2ftf/must-gather-m5n5r" event={"ID":"1ece9af6-7d99-4335-adcc-28378b4222cb","Type":"ContainerStarted","Data":"c1c1b4c6d5866edafbcb3077f8d279b2c7d4ac2106b8885933e4ccb55b0553c6"}
Jan 10 07:13:32 crc kubenswrapper[4810]: I0110 07:13:32.507295 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x2ftf/must-gather-m5n5r" event={"ID":"1ece9af6-7d99-4335-adcc-28378b4222cb","Type":"ContainerStarted","Data":"85c716b34cdc433b67d1bebf892bd4b9c647a3e348cb7cb567bd04b657a53f99"}
Jan 10 07:13:32 crc kubenswrapper[4810]: I0110 07:13:32.523898 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x2ftf/must-gather-m5n5r" podStartSLOduration=1.976385304 podStartE2EDuration="10.52387554s" podCreationTimestamp="2026-01-10 07:13:22 +0000 UTC" firstStartedPulling="2026-01-10 07:13:23.062377951 +0000 UTC m=+1631.677870864" lastFinishedPulling="2026-01-10 07:13:31.609868207 +0000 UTC m=+1640.225361100" observedRunningTime="2026-01-10 07:13:32.520331066 +0000 UTC m=+1641.135823989" watchObservedRunningTime="2026-01-10 07:13:32.52387554 +0000 UTC m=+1641.139368423"
Jan 10 07:13:32 crc kubenswrapper[4810]: I0110 07:13:32.802924 4810 scope.go:117] "RemoveContainer" containerID="16ca45ab3bd2c20a8365b9833e6eb04342f8da31b5113f6b383d6694ce0fb21e"
Jan 10 07:13:32 crc kubenswrapper[4810]: I0110 07:13:32.823958 4810 scope.go:117] "RemoveContainer" containerID="eef1aca1b73720c3bdf620a5345cdcc49f54062522f0a7537096b8d4154e6cdf"
Jan 10 07:13:32 crc kubenswrapper[4810]: I0110 07:13:32.851793 4810 scope.go:117] "RemoveContainer" containerID="88641b5063d73adc388c7dc0924d1677e02fc2e6693fdb7d1ce3f48c056b15c6"
Jan 10 07:13:32 crc kubenswrapper[4810]: I0110 07:13:32.871184 4810 scope.go:117] "RemoveContainer" containerID="0372e377af72e00e0c75e870f1e06442614ed60cf294421362c2efa404038264"
Jan 10 07:13:32 crc kubenswrapper[4810]: I0110 07:13:32.903957 4810 scope.go:117] "RemoveContainer" containerID="ae38555bf51d6484c475bb991f05081f57c6cc3492e808531c0c8807b09648da"
Jan 10 07:13:32 crc kubenswrapper[4810]: I0110 07:13:32.925564 4810 scope.go:117] "RemoveContainer" containerID="9b3883789dce067f2f463848a5772f35118a8e78db77f8c090501bae5c9fcb89"
Jan 10 07:13:32 crc kubenswrapper[4810]: I0110 07:13:32.949905 4810 scope.go:117] "RemoveContainer" containerID="e3b35dd01b0c205ec9ee0de2dae3c2f168f1e30934e2da5573a21a71af001fd8"
Jan 10 07:13:32 crc kubenswrapper[4810]: I0110 07:13:32.964855 4810 scope.go:117] "RemoveContainer" containerID="9604d779e2b0ad8c4a1a3a5a75a2c67b5feb3e69dd43eb119786582bced25621"
Jan 10 07:13:32 crc kubenswrapper[4810]: I0110 07:13:32.979261 4810 scope.go:117] "RemoveContainer" containerID="52f3902518e37fcfb66f76ac8e0b74a03c8abf92c64355e50ff92a6177b5f796"
Jan 10 07:13:32 crc kubenswrapper[4810]: I0110 07:13:32.995236 4810 scope.go:117] "RemoveContainer" containerID="ad71bf32f3aeaf27b90f3311bd0d136e37d8220d97b44b8e79479512d36013f5"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.010097 4810 scope.go:117] "RemoveContainer" containerID="363175a0a8f2b31f8c3f22a0e7e5bf0dc9d319a75d067f36e806b5529389d136"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.045995 4810 scope.go:117] "RemoveContainer" containerID="c2caa4d5df530493c280979753936e43a9147f0a88334355300e92c21d249e68"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.065391 4810 scope.go:117] "RemoveContainer" containerID="9c747283e409513af3b7c0f43c232aacaefe07e7d75b8d9de038f14bbfc8adf1"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.083601 4810 scope.go:117] "RemoveContainer" containerID="bba019c7ea9d8f796435fdd27e6cb428e7ccf13234225b2876ac51ebfe2e0c03"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.101055 4810 scope.go:117] "RemoveContainer" containerID="f979642fefe549adb5beed83d78e729c7a655e570f9c613a7acb02d485a72c3e"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.121719 4810 scope.go:117] "RemoveContainer" containerID="963484e64b9d0a7cddbb6c37324579efe31f852300e7dcf6846254d8c6480cfe"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.140277 4810 scope.go:117] "RemoveContainer" containerID="05975c5ea8093b17636b4b2fce9a31166cc2105b0b1b651e21dcd44845584d1e"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.160130 4810 scope.go:117] "RemoveContainer" containerID="3f9337297a17f474a93fe102b86571b82440edbaff3fd5514a7fdbfd0d2f447b"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.179079 4810 scope.go:117] "RemoveContainer" containerID="2cf2be50ae86bac9d79e09750fd51de373ac70cf5c1e9c18b8e50a69f7fa6358"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.199130 4810 scope.go:117] "RemoveContainer" containerID="98290f3347a44234bcf9800c5d3cc6af1020f0d56a2a4922cc3110b02f8ca929"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.221382 4810 scope.go:117] "RemoveContainer" containerID="bf091f6245bcbc9fb4c30afd42548815f3e0338d7def9e77b2768c1ee3ecdd49"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.238932 4810 scope.go:117] "RemoveContainer" containerID="d4c9794afdb002fea55f6652ae0a8ddcd6b5685a9b41bb05282cd914b6aca3e0"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.258566 4810 scope.go:117] "RemoveContainer" containerID="90b750ac09094a02f95ec66a20e1a6bf1268b975cc2fd848c5fb28a7c017abc0"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.272704 4810 scope.go:117] "RemoveContainer" containerID="adda8a31813683e6300962ca84d6875b471352c3d5f2a8e4e0869e65c7146faa"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.299866 4810 scope.go:117] "RemoveContainer" containerID="d8775154c2e29f0bc5e96edfcc5fb97b8bc79e716cccda84bb18de1f52da1a61"
Jan 10 07:13:33 crc kubenswrapper[4810]: I0110 07:13:33.318869 4810 scope.go:117] "RemoveContainer" containerID="172764c6148fb4b814d4026834e102edecd8ac340089e85119824db36e120b7f"
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.182842 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tjqxt"]
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.184507 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.206046 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjqxt"]
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.287414 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396eda04-53ae-4ea3-8204-da08e1b098ce-catalog-content\") pod \"certified-operators-tjqxt\" (UID: \"396eda04-53ae-4ea3-8204-da08e1b098ce\") " pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.287574 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396eda04-53ae-4ea3-8204-da08e1b098ce-utilities\") pod \"certified-operators-tjqxt\" (UID: \"396eda04-53ae-4ea3-8204-da08e1b098ce\") " pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.287636 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5wsv\" (UniqueName: \"kubernetes.io/projected/396eda04-53ae-4ea3-8204-da08e1b098ce-kube-api-access-z5wsv\") pod \"certified-operators-tjqxt\" (UID: \"396eda04-53ae-4ea3-8204-da08e1b098ce\") " pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.388159 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396eda04-53ae-4ea3-8204-da08e1b098ce-utilities\") pod \"certified-operators-tjqxt\" (UID: \"396eda04-53ae-4ea3-8204-da08e1b098ce\") " pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.388265 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5wsv\" (UniqueName: \"kubernetes.io/projected/396eda04-53ae-4ea3-8204-da08e1b098ce-kube-api-access-z5wsv\") pod \"certified-operators-tjqxt\" (UID: \"396eda04-53ae-4ea3-8204-da08e1b098ce\") " pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.388303 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396eda04-53ae-4ea3-8204-da08e1b098ce-catalog-content\") pod \"certified-operators-tjqxt\" (UID: \"396eda04-53ae-4ea3-8204-da08e1b098ce\") " pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.388753 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396eda04-53ae-4ea3-8204-da08e1b098ce-catalog-content\") pod \"certified-operators-tjqxt\" (UID: \"396eda04-53ae-4ea3-8204-da08e1b098ce\") " pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.388973 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396eda04-53ae-4ea3-8204-da08e1b098ce-utilities\") pod \"certified-operators-tjqxt\" (UID: \"396eda04-53ae-4ea3-8204-da08e1b098ce\") " pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.414189 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5wsv\" (UniqueName: \"kubernetes.io/projected/396eda04-53ae-4ea3-8204-da08e1b098ce-kube-api-access-z5wsv\") pod \"certified-operators-tjqxt\" (UID: \"396eda04-53ae-4ea3-8204-da08e1b098ce\") " pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.511887 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:36 crc kubenswrapper[4810]: I0110 07:13:36.930070 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjqxt"]
Jan 10 07:13:37 crc kubenswrapper[4810]: I0110 07:13:37.554333 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjqxt" event={"ID":"396eda04-53ae-4ea3-8204-da08e1b098ce","Type":"ContainerStarted","Data":"d1fcc7f9a32136df8e3d70b50f9c65f3db1d07b6bf77e038ea7a243eab235524"}
Jan 10 07:13:42 crc kubenswrapper[4810]: I0110 07:13:42.588224 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjqxt" event={"ID":"396eda04-53ae-4ea3-8204-da08e1b098ce","Type":"ContainerStarted","Data":"69ec8978a125891b9dfd39ba1cf4f94e87209175344d113e8e308206be7c7de0"}
Jan 10 07:13:44 crc kubenswrapper[4810]: I0110 07:13:44.601720 4810 generic.go:334] "Generic (PLEG): container finished" podID="396eda04-53ae-4ea3-8204-da08e1b098ce" containerID="69ec8978a125891b9dfd39ba1cf4f94e87209175344d113e8e308206be7c7de0" exitCode=0
Jan 10 07:13:44 crc kubenswrapper[4810]: I0110 07:13:44.601796 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjqxt" event={"ID":"396eda04-53ae-4ea3-8204-da08e1b098ce","Type":"ContainerDied","Data":"69ec8978a125891b9dfd39ba1cf4f94e87209175344d113e8e308206be7c7de0"}
Jan 10 07:13:46 crc kubenswrapper[4810]: I0110 07:13:46.618047 4810 generic.go:334] "Generic (PLEG): container finished" podID="396eda04-53ae-4ea3-8204-da08e1b098ce" containerID="971e47188c278f8ebe34ce13c590b16f095a488be307899892bd5744610a0c63" exitCode=0
Jan 10 07:13:46 crc kubenswrapper[4810]: I0110 07:13:46.618118 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjqxt" event={"ID":"396eda04-53ae-4ea3-8204-da08e1b098ce","Type":"ContainerDied","Data":"971e47188c278f8ebe34ce13c590b16f095a488be307899892bd5744610a0c63"}
Jan 10 07:13:49 crc kubenswrapper[4810]: I0110 07:13:49.653769 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjqxt" event={"ID":"396eda04-53ae-4ea3-8204-da08e1b098ce","Type":"ContainerStarted","Data":"77a59aa2f75b28bb227b9a675ff971792f72fdfc503bc8aaefb8acd1a908d43f"}
Jan 10 07:13:50 crc kubenswrapper[4810]: I0110 07:13:50.678835 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tjqxt" podStartSLOduration=10.266018028 podStartE2EDuration="14.678818014s" podCreationTimestamp="2026-01-10 07:13:36 +0000 UTC" firstStartedPulling="2026-01-10 07:13:44.603841375 +0000 UTC m=+1653.219334258" lastFinishedPulling="2026-01-10 07:13:49.016641361 +0000 UTC m=+1657.632134244" observedRunningTime="2026-01-10 07:13:50.675620096 +0000 UTC m=+1659.291113019" watchObservedRunningTime="2026-01-10 07:13:50.678818014 +0000 UTC m=+1659.294310897"
Jan 10 07:13:50 crc kubenswrapper[4810]: I0110 07:13:50.883127 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 07:13:50 crc kubenswrapper[4810]: I0110 07:13:50.883425 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 07:13:56 crc kubenswrapper[4810]: I0110 07:13:56.512976 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:56 crc kubenswrapper[4810]: I0110 07:13:56.513419 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:56 crc kubenswrapper[4810]: I0110 07:13:56.553773 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:56 crc kubenswrapper[4810]: I0110 07:13:56.752370 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:13:56 crc kubenswrapper[4810]: I0110 07:13:56.809145 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tjqxt"]
Jan 10 07:13:58 crc kubenswrapper[4810]: I0110 07:13:58.710851 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tjqxt" podUID="396eda04-53ae-4ea3-8204-da08e1b098ce" containerName="registry-server" containerID="cri-o://77a59aa2f75b28bb227b9a675ff971792f72fdfc503bc8aaefb8acd1a908d43f" gracePeriod=2
Jan 10 07:14:06 crc kubenswrapper[4810]: E0110 07:14:06.513214 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77a59aa2f75b28bb227b9a675ff971792f72fdfc503bc8aaefb8acd1a908d43f is running failed: container process not found" containerID="77a59aa2f75b28bb227b9a675ff971792f72fdfc503bc8aaefb8acd1a908d43f" cmd=["grpc_health_probe","-addr=:50051"]
Jan 10 07:14:06 crc kubenswrapper[4810]: E0110 07:14:06.514315 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77a59aa2f75b28bb227b9a675ff971792f72fdfc503bc8aaefb8acd1a908d43f is running failed: container process not found" containerID="77a59aa2f75b28bb227b9a675ff971792f72fdfc503bc8aaefb8acd1a908d43f" cmd=["grpc_health_probe","-addr=:50051"]
Jan 10 07:14:06 crc kubenswrapper[4810]: E0110 07:14:06.514655 4810 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77a59aa2f75b28bb227b9a675ff971792f72fdfc503bc8aaefb8acd1a908d43f is running failed: container process not found" containerID="77a59aa2f75b28bb227b9a675ff971792f72fdfc503bc8aaefb8acd1a908d43f" cmd=["grpc_health_probe","-addr=:50051"]
Jan 10 07:14:06 crc kubenswrapper[4810]: E0110 07:14:06.514716 4810 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 77a59aa2f75b28bb227b9a675ff971792f72fdfc503bc8aaefb8acd1a908d43f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-tjqxt" podUID="396eda04-53ae-4ea3-8204-da08e1b098ce" containerName="registry-server"
Jan 10 07:14:08 crc kubenswrapper[4810]: I0110 07:14:08.954873 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjqxt_396eda04-53ae-4ea3-8204-da08e1b098ce/registry-server/0.log"
Jan 10 07:14:08 crc kubenswrapper[4810]: I0110 07:14:08.959694 4810 generic.go:334] "Generic (PLEG): container finished" podID="396eda04-53ae-4ea3-8204-da08e1b098ce" containerID="77a59aa2f75b28bb227b9a675ff971792f72fdfc503bc8aaefb8acd1a908d43f" exitCode=-1
Jan 10 07:14:08 crc kubenswrapper[4810]: I0110 07:14:08.959746 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjqxt" event={"ID":"396eda04-53ae-4ea3-8204-da08e1b098ce","Type":"ContainerDied","Data":"77a59aa2f75b28bb227b9a675ff971792f72fdfc503bc8aaefb8acd1a908d43f"}
Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.479222 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjqxt"
Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.611536 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396eda04-53ae-4ea3-8204-da08e1b098ce-catalog-content\") pod \"396eda04-53ae-4ea3-8204-da08e1b098ce\" (UID: \"396eda04-53ae-4ea3-8204-da08e1b098ce\") "
Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.611584 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5wsv\" (UniqueName: \"kubernetes.io/projected/396eda04-53ae-4ea3-8204-da08e1b098ce-kube-api-access-z5wsv\") pod \"396eda04-53ae-4ea3-8204-da08e1b098ce\" (UID: \"396eda04-53ae-4ea3-8204-da08e1b098ce\") "
Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.611603 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396eda04-53ae-4ea3-8204-da08e1b098ce-utilities\") pod \"396eda04-53ae-4ea3-8204-da08e1b098ce\" (UID: \"396eda04-53ae-4ea3-8204-da08e1b098ce\") "
Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.612693 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/396eda04-53ae-4ea3-8204-da08e1b098ce-utilities" (OuterVolumeSpecName: "utilities") pod "396eda04-53ae-4ea3-8204-da08e1b098ce" (UID: "396eda04-53ae-4ea3-8204-da08e1b098ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.618656 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/396eda04-53ae-4ea3-8204-da08e1b098ce-kube-api-access-z5wsv" (OuterVolumeSpecName: "kube-api-access-z5wsv") pod "396eda04-53ae-4ea3-8204-da08e1b098ce" (UID: "396eda04-53ae-4ea3-8204-da08e1b098ce"). InnerVolumeSpecName "kube-api-access-z5wsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.671176 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/396eda04-53ae-4ea3-8204-da08e1b098ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "396eda04-53ae-4ea3-8204-da08e1b098ce" (UID: "396eda04-53ae-4ea3-8204-da08e1b098ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.713351 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/396eda04-53ae-4ea3-8204-da08e1b098ce-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.713387 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5wsv\" (UniqueName: \"kubernetes.io/projected/396eda04-53ae-4ea3-8204-da08e1b098ce-kube-api-access-z5wsv\") on node \"crc\" DevicePath \"\""
Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.713401 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/396eda04-53ae-4ea3-8204-da08e1b098ce-utilities\") on node \"crc\" DevicePath \"\""
Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.966671 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjqxt" event={"ID":"396eda04-53ae-4ea3-8204-da08e1b098ce","Type":"ContainerDied","Data":"d1fcc7f9a32136df8e3d70b50f9c65f3db1d07b6bf77e038ea7a243eab235524"}
Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.966729 4810 scope.go:117] "RemoveContainer" containerID="77a59aa2f75b28bb227b9a675ff971792f72fdfc503bc8aaefb8acd1a908d43f"
Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.966743 4810 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-tjqxt" Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.989487 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tjqxt"] Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.993908 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tjqxt"] Jan 10 07:14:09 crc kubenswrapper[4810]: I0110 07:14:09.999917 4810 scope.go:117] "RemoveContainer" containerID="971e47188c278f8ebe34ce13c590b16f095a488be307899892bd5744610a0c63" Jan 10 07:14:10 crc kubenswrapper[4810]: I0110 07:14:10.025656 4810 scope.go:117] "RemoveContainer" containerID="69ec8978a125891b9dfd39ba1cf4f94e87209175344d113e8e308206be7c7de0" Jan 10 07:14:11 crc kubenswrapper[4810]: I0110 07:14:11.701418 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="396eda04-53ae-4ea3-8204-da08e1b098ce" path="/var/lib/kubelet/pods/396eda04-53ae-4ea3-8204-da08e1b098ce/volumes" Jan 10 07:14:19 crc kubenswrapper[4810]: I0110 07:14:19.195845 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-r9mqp_49455ed9-e8ab-44c8-9075-2ffbbebe36a8/control-plane-machine-set-operator/0.log" Jan 10 07:14:19 crc kubenswrapper[4810]: I0110 07:14:19.393421 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t88kw_85c62a01-c818-47b4-92fd-bcd87d8218a8/kube-rbac-proxy/0.log" Jan 10 07:14:19 crc kubenswrapper[4810]: I0110 07:14:19.444909 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t88kw_85c62a01-c818-47b4-92fd-bcd87d8218a8/machine-api-operator/0.log" Jan 10 07:14:20 crc kubenswrapper[4810]: I0110 07:14:20.883428 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 10 07:14:20 crc kubenswrapper[4810]: I0110 07:14:20.883539 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 10 07:14:20 crc kubenswrapper[4810]: I0110 07:14:20.883596 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" Jan 10 07:14:20 crc kubenswrapper[4810]: I0110 07:14:20.884368 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910"} pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 10 07:14:20 crc kubenswrapper[4810]: I0110 07:14:20.884471 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" containerID="cri-o://09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" gracePeriod=600 Jan 10 07:14:23 crc kubenswrapper[4810]: E0110 07:14:23.700746 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:14:23 crc kubenswrapper[4810]: I0110 07:14:23.731458 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zlz2p"] Jan 10 07:14:23 crc kubenswrapper[4810]: E0110 07:14:23.731671 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396eda04-53ae-4ea3-8204-da08e1b098ce" containerName="extract-content" Jan 10 07:14:23 crc kubenswrapper[4810]: I0110 07:14:23.731682 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="396eda04-53ae-4ea3-8204-da08e1b098ce" containerName="extract-content" Jan 10 07:14:23 crc kubenswrapper[4810]: E0110 07:14:23.731709 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396eda04-53ae-4ea3-8204-da08e1b098ce" containerName="registry-server" Jan 10 07:14:23 crc kubenswrapper[4810]: I0110 07:14:23.731714 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="396eda04-53ae-4ea3-8204-da08e1b098ce" containerName="registry-server" Jan 10 07:14:23 crc kubenswrapper[4810]: E0110 07:14:23.731734 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396eda04-53ae-4ea3-8204-da08e1b098ce" containerName="extract-utilities" Jan 10 07:14:23 crc kubenswrapper[4810]: I0110 07:14:23.731740 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="396eda04-53ae-4ea3-8204-da08e1b098ce" containerName="extract-utilities" Jan 10 07:14:23 crc kubenswrapper[4810]: I0110 07:14:23.731830 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="396eda04-53ae-4ea3-8204-da08e1b098ce" containerName="registry-server" Jan 10 07:14:23 crc kubenswrapper[4810]: I0110 07:14:23.732570 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:23 crc kubenswrapper[4810]: I0110 07:14:23.749747 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zlz2p"] Jan 10 07:14:23 crc kubenswrapper[4810]: I0110 07:14:23.898472 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-catalog-content\") pod \"redhat-operators-zlz2p\" (UID: \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\") " pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:23 crc kubenswrapper[4810]: I0110 07:14:23.899096 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlppg\" (UniqueName: \"kubernetes.io/projected/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-kube-api-access-qlppg\") pod \"redhat-operators-zlz2p\" (UID: \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\") " pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:23 crc kubenswrapper[4810]: I0110 07:14:23.899414 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-utilities\") pod \"redhat-operators-zlz2p\" (UID: \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\") " pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:24 crc kubenswrapper[4810]: I0110 07:14:24.000866 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-catalog-content\") pod \"redhat-operators-zlz2p\" (UID: \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\") " pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:24 crc kubenswrapper[4810]: I0110 07:14:24.000920 4810 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-qlppg\" (UniqueName: \"kubernetes.io/projected/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-kube-api-access-qlppg\") pod \"redhat-operators-zlz2p\" (UID: \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\") " pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:24 crc kubenswrapper[4810]: I0110 07:14:24.000951 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-utilities\") pod \"redhat-operators-zlz2p\" (UID: \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\") " pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:24 crc kubenswrapper[4810]: I0110 07:14:24.001468 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-catalog-content\") pod \"redhat-operators-zlz2p\" (UID: \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\") " pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:24 crc kubenswrapper[4810]: I0110 07:14:24.001515 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-utilities\") pod \"redhat-operators-zlz2p\" (UID: \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\") " pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:24 crc kubenswrapper[4810]: I0110 07:14:24.021965 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlppg\" (UniqueName: \"kubernetes.io/projected/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-kube-api-access-qlppg\") pod \"redhat-operators-zlz2p\" (UID: \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\") " pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:24 crc kubenswrapper[4810]: I0110 07:14:24.047003 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:24 crc kubenswrapper[4810]: I0110 07:14:24.053018 4810 generic.go:334] "Generic (PLEG): container finished" podID="b5b79429-9259-412f-bab8-27865ab7029b" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" exitCode=0 Jan 10 07:14:24 crc kubenswrapper[4810]: I0110 07:14:24.053062 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerDied","Data":"09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910"} Jan 10 07:14:24 crc kubenswrapper[4810]: I0110 07:14:24.053100 4810 scope.go:117] "RemoveContainer" containerID="a1621bdd26d8919cc52b7877444181bced5c9989faf9461e3b82b9a71cd444d4" Jan 10 07:14:24 crc kubenswrapper[4810]: I0110 07:14:24.053781 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:14:24 crc kubenswrapper[4810]: E0110 07:14:24.053989 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:14:24 crc kubenswrapper[4810]: I0110 07:14:24.356714 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zlz2p"] Jan 10 07:14:24 crc kubenswrapper[4810]: W0110 07:14:24.366642 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod548e27ac_ac4b_4ba1_9507_f5d0e3063bc2.slice/crio-7ddefd35ada9b6c82a933f634ff2828143375d34775ede992a228c1838250b82 
WatchSource:0}: Error finding container 7ddefd35ada9b6c82a933f634ff2828143375d34775ede992a228c1838250b82: Status 404 returned error can't find the container with id 7ddefd35ada9b6c82a933f634ff2828143375d34775ede992a228c1838250b82 Jan 10 07:14:25 crc kubenswrapper[4810]: I0110 07:14:25.059687 4810 generic.go:334] "Generic (PLEG): container finished" podID="548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" containerID="7e576efd869ebe649b6501178639e2785c6af0bcd3496a9cca47c133a2ae5671" exitCode=0 Jan 10 07:14:25 crc kubenswrapper[4810]: I0110 07:14:25.059741 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlz2p" event={"ID":"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2","Type":"ContainerDied","Data":"7e576efd869ebe649b6501178639e2785c6af0bcd3496a9cca47c133a2ae5671"} Jan 10 07:14:25 crc kubenswrapper[4810]: I0110 07:14:25.059766 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlz2p" event={"ID":"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2","Type":"ContainerStarted","Data":"7ddefd35ada9b6c82a933f634ff2828143375d34775ede992a228c1838250b82"} Jan 10 07:14:27 crc kubenswrapper[4810]: I0110 07:14:27.075285 4810 generic.go:334] "Generic (PLEG): container finished" podID="548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" containerID="8b94a03c56bf0c50a3a3381abb09345901d6a9c39f268598ac43193fe9086505" exitCode=0 Jan 10 07:14:27 crc kubenswrapper[4810]: I0110 07:14:27.075385 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlz2p" event={"ID":"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2","Type":"ContainerDied","Data":"8b94a03c56bf0c50a3a3381abb09345901d6a9c39f268598ac43193fe9086505"} Jan 10 07:14:28 crc kubenswrapper[4810]: I0110 07:14:28.082711 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlz2p" 
event={"ID":"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2","Type":"ContainerStarted","Data":"ad660a6a7d8565a4850d1bc2204633ae50ff4541600b42e5f3435a4a2ef5728d"} Jan 10 07:14:28 crc kubenswrapper[4810]: I0110 07:14:28.105825 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zlz2p" podStartSLOduration=2.502807334 podStartE2EDuration="5.10580584s" podCreationTimestamp="2026-01-10 07:14:23 +0000 UTC" firstStartedPulling="2026-01-10 07:14:25.061447826 +0000 UTC m=+1693.676940709" lastFinishedPulling="2026-01-10 07:14:27.664446322 +0000 UTC m=+1696.279939215" observedRunningTime="2026-01-10 07:14:28.103504865 +0000 UTC m=+1696.718997748" watchObservedRunningTime="2026-01-10 07:14:28.10580584 +0000 UTC m=+1696.721298723" Jan 10 07:14:33 crc kubenswrapper[4810]: I0110 07:14:33.648923 4810 scope.go:117] "RemoveContainer" containerID="7865c656949217c626d3c92047ddf9ed9fc4984d4128794f19fff4f8b9cfda76" Jan 10 07:14:33 crc kubenswrapper[4810]: I0110 07:14:33.669373 4810 scope.go:117] "RemoveContainer" containerID="4d3c07fade99781031f480acea8bd415266e89ece0282f29137d152599574a6b" Jan 10 07:14:33 crc kubenswrapper[4810]: I0110 07:14:33.692383 4810 scope.go:117] "RemoveContainer" containerID="8b3b2744a59e7d7c7dc79f450b219e30c086dc5748c7c5e18476101ceb1fd3f6" Jan 10 07:14:34 crc kubenswrapper[4810]: I0110 07:14:34.047979 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:34 crc kubenswrapper[4810]: I0110 07:14:34.048036 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:34 crc kubenswrapper[4810]: I0110 07:14:34.091749 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:34 crc kubenswrapper[4810]: I0110 07:14:34.195372 4810 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:34 crc kubenswrapper[4810]: I0110 07:14:34.329405 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zlz2p"] Jan 10 07:14:36 crc kubenswrapper[4810]: I0110 07:14:36.144377 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zlz2p" podUID="548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" containerName="registry-server" containerID="cri-o://ad660a6a7d8565a4850d1bc2204633ae50ff4541600b42e5f3435a4a2ef5728d" gracePeriod=2 Jan 10 07:14:36 crc kubenswrapper[4810]: I0110 07:14:36.693028 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:14:36 crc kubenswrapper[4810]: E0110 07:14:36.693414 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:14:40 crc kubenswrapper[4810]: I0110 07:14:40.173934 4810 generic.go:334] "Generic (PLEG): container finished" podID="548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" containerID="ad660a6a7d8565a4850d1bc2204633ae50ff4541600b42e5f3435a4a2ef5728d" exitCode=0 Jan 10 07:14:40 crc kubenswrapper[4810]: I0110 07:14:40.174031 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlz2p" event={"ID":"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2","Type":"ContainerDied","Data":"ad660a6a7d8565a4850d1bc2204633ae50ff4541600b42e5f3435a4a2ef5728d"} Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.086057 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.184041 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zlz2p" event={"ID":"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2","Type":"ContainerDied","Data":"7ddefd35ada9b6c82a933f634ff2828143375d34775ede992a228c1838250b82"} Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.184124 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zlz2p" Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.184139 4810 scope.go:117] "RemoveContainer" containerID="ad660a6a7d8565a4850d1bc2204633ae50ff4541600b42e5f3435a4a2ef5728d" Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.200591 4810 scope.go:117] "RemoveContainer" containerID="8b94a03c56bf0c50a3a3381abb09345901d6a9c39f268598ac43193fe9086505" Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.228575 4810 scope.go:117] "RemoveContainer" containerID="7e576efd869ebe649b6501178639e2785c6af0bcd3496a9cca47c133a2ae5671" Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.245798 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-utilities\") pod \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\" (UID: \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\") " Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.245896 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-catalog-content\") pod \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\" (UID: \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\") " Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.245974 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlppg\" 
(UniqueName: \"kubernetes.io/projected/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-kube-api-access-qlppg\") pod \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\" (UID: \"548e27ac-ac4b-4ba1-9507-f5d0e3063bc2\") " Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.246891 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-utilities" (OuterVolumeSpecName: "utilities") pod "548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" (UID: "548e27ac-ac4b-4ba1-9507-f5d0e3063bc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.257057 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-kube-api-access-qlppg" (OuterVolumeSpecName: "kube-api-access-qlppg") pod "548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" (UID: "548e27ac-ac4b-4ba1-9507-f5d0e3063bc2"). InnerVolumeSpecName "kube-api-access-qlppg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.348318 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.348366 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlppg\" (UniqueName: \"kubernetes.io/projected/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-kube-api-access-qlppg\") on node \"crc\" DevicePath \"\"" Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.378489 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" (UID: "548e27ac-ac4b-4ba1-9507-f5d0e3063bc2"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.449904 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.511221 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zlz2p"] Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.516115 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zlz2p"] Jan 10 07:14:41 crc kubenswrapper[4810]: I0110 07:14:41.701762 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" path="/var/lib/kubelet/pods/548e27ac-ac4b-4ba1-9507-f5d0e3063bc2/volumes" Jan 10 07:14:46 crc kubenswrapper[4810]: I0110 07:14:46.329596 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-qv4c2_7a4eb8a0-6ca1-4dca-a9a5-37a00569037d/kube-rbac-proxy/0.log" Jan 10 07:14:46 crc kubenswrapper[4810]: I0110 07:14:46.426091 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-qv4c2_7a4eb8a0-6ca1-4dca-a9a5-37a00569037d/controller/0.log" Jan 10 07:14:46 crc kubenswrapper[4810]: I0110 07:14:46.588508 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-frr-files/0.log" Jan 10 07:14:46 crc kubenswrapper[4810]: I0110 07:14:46.705512 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-frr-files/0.log" Jan 10 07:14:46 crc kubenswrapper[4810]: I0110 07:14:46.747019 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-metrics/0.log" Jan 10 07:14:46 crc kubenswrapper[4810]: I0110 07:14:46.757700 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-reloader/0.log" Jan 10 07:14:46 crc kubenswrapper[4810]: I0110 07:14:46.757819 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-reloader/0.log" Jan 10 07:14:46 crc kubenswrapper[4810]: I0110 07:14:46.941741 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-frr-files/0.log" Jan 10 07:14:46 crc kubenswrapper[4810]: I0110 07:14:46.954003 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-reloader/0.log" Jan 10 07:14:46 crc kubenswrapper[4810]: I0110 07:14:46.988403 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-metrics/0.log" Jan 10 07:14:46 crc kubenswrapper[4810]: I0110 07:14:46.988494 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-metrics/0.log" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.121621 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-frr-files/0.log" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.139822 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-reloader/0.log" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.177434 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/controller/0.log" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.187978 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-metrics/0.log" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.321567 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/frr-metrics/0.log" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.330918 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/kube-rbac-proxy-frr/0.log" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.385567 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/kube-rbac-proxy/0.log" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.509267 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/reloader/0.log" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.592621 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-rhrms_aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed/frr-k8s-webhook-server/0.log" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.692418 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:14:47 crc kubenswrapper[4810]: E0110 07:14:47.692641 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.794361 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-78d994474f-c66kc_1c2f8c1a-da9b-4758-8470-7495e89762af/manager/0.log" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.928408 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75f446db8b-dsgn2_d5e40a28-b80a-4ba9-87e7-039e63e9e4d0/webhook-server/0.log" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.943561 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/frr/0.log" Jan 10 07:14:47 crc kubenswrapper[4810]: I0110 07:14:47.994017 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7rjff_98e1cf89-483c-49fc-a8b8-e89615b4d86d/kube-rbac-proxy/0.log" Jan 10 07:14:48 crc kubenswrapper[4810]: I0110 07:14:48.182990 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7rjff_98e1cf89-483c-49fc-a8b8-e89615b4d86d/speaker/0.log" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.143739 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd"] Jan 10 07:15:00 crc kubenswrapper[4810]: E0110 07:15:00.144730 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" containerName="extract-utilities" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.144755 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" containerName="extract-utilities" Jan 10 07:15:00 crc kubenswrapper[4810]: E0110 07:15:00.144792 4810 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" containerName="extract-content" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.144805 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" containerName="extract-content" Jan 10 07:15:00 crc kubenswrapper[4810]: E0110 07:15:00.144827 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" containerName="registry-server" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.144840 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" containerName="registry-server" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.145038 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="548e27ac-ac4b-4ba1-9507-f5d0e3063bc2" containerName="registry-server" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.145720 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.147998 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.150416 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.154334 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd"] Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.275868 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c83c0e0-eb00-4594-8208-554506edefb0-secret-volume\") pod \"collect-profiles-29467155-2w4nd\" (UID: 
\"2c83c0e0-eb00-4594-8208-554506edefb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.275958 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c83c0e0-eb00-4594-8208-554506edefb0-config-volume\") pod \"collect-profiles-29467155-2w4nd\" (UID: \"2c83c0e0-eb00-4594-8208-554506edefb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.275993 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg5xt\" (UniqueName: \"kubernetes.io/projected/2c83c0e0-eb00-4594-8208-554506edefb0-kube-api-access-mg5xt\") pod \"collect-profiles-29467155-2w4nd\" (UID: \"2c83c0e0-eb00-4594-8208-554506edefb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.377349 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c83c0e0-eb00-4594-8208-554506edefb0-config-volume\") pod \"collect-profiles-29467155-2w4nd\" (UID: \"2c83c0e0-eb00-4594-8208-554506edefb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.377399 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg5xt\" (UniqueName: \"kubernetes.io/projected/2c83c0e0-eb00-4594-8208-554506edefb0-kube-api-access-mg5xt\") pod \"collect-profiles-29467155-2w4nd\" (UID: \"2c83c0e0-eb00-4594-8208-554506edefb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.377451 4810 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c83c0e0-eb00-4594-8208-554506edefb0-secret-volume\") pod \"collect-profiles-29467155-2w4nd\" (UID: \"2c83c0e0-eb00-4594-8208-554506edefb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.378182 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c83c0e0-eb00-4594-8208-554506edefb0-config-volume\") pod \"collect-profiles-29467155-2w4nd\" (UID: \"2c83c0e0-eb00-4594-8208-554506edefb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.382970 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c83c0e0-eb00-4594-8208-554506edefb0-secret-volume\") pod \"collect-profiles-29467155-2w4nd\" (UID: \"2c83c0e0-eb00-4594-8208-554506edefb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.402390 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg5xt\" (UniqueName: \"kubernetes.io/projected/2c83c0e0-eb00-4594-8208-554506edefb0-kube-api-access-mg5xt\") pod \"collect-profiles-29467155-2w4nd\" (UID: \"2c83c0e0-eb00-4594-8208-554506edefb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.463284 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" Jan 10 07:15:00 crc kubenswrapper[4810]: I0110 07:15:00.696392 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd"] Jan 10 07:15:01 crc kubenswrapper[4810]: I0110 07:15:01.293701 4810 generic.go:334] "Generic (PLEG): container finished" podID="2c83c0e0-eb00-4594-8208-554506edefb0" containerID="40a1b1c60b29872c2db022ee26e37dd7a51df39d929e5ab49afd1bd98b6d8ccd" exitCode=0 Jan 10 07:15:01 crc kubenswrapper[4810]: I0110 07:15:01.293907 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" event={"ID":"2c83c0e0-eb00-4594-8208-554506edefb0","Type":"ContainerDied","Data":"40a1b1c60b29872c2db022ee26e37dd7a51df39d929e5ab49afd1bd98b6d8ccd"} Jan 10 07:15:01 crc kubenswrapper[4810]: I0110 07:15:01.294003 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" event={"ID":"2c83c0e0-eb00-4594-8208-554506edefb0","Type":"ContainerStarted","Data":"4990550a54bfefb99716534e395d05e71c5684f9f2fb161a1048348bc001b70b"} Jan 10 07:15:01 crc kubenswrapper[4810]: I0110 07:15:01.698266 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:15:01 crc kubenswrapper[4810]: E0110 07:15:01.698566 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:15:02 crc kubenswrapper[4810]: I0110 07:15:02.532406 4810 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" Jan 10 07:15:02 crc kubenswrapper[4810]: I0110 07:15:02.608390 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5xt\" (UniqueName: \"kubernetes.io/projected/2c83c0e0-eb00-4594-8208-554506edefb0-kube-api-access-mg5xt\") pod \"2c83c0e0-eb00-4594-8208-554506edefb0\" (UID: \"2c83c0e0-eb00-4594-8208-554506edefb0\") " Jan 10 07:15:02 crc kubenswrapper[4810]: I0110 07:15:02.608451 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c83c0e0-eb00-4594-8208-554506edefb0-config-volume\") pod \"2c83c0e0-eb00-4594-8208-554506edefb0\" (UID: \"2c83c0e0-eb00-4594-8208-554506edefb0\") " Jan 10 07:15:02 crc kubenswrapper[4810]: I0110 07:15:02.608538 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c83c0e0-eb00-4594-8208-554506edefb0-secret-volume\") pod \"2c83c0e0-eb00-4594-8208-554506edefb0\" (UID: \"2c83c0e0-eb00-4594-8208-554506edefb0\") " Jan 10 07:15:02 crc kubenswrapper[4810]: I0110 07:15:02.618629 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c83c0e0-eb00-4594-8208-554506edefb0-kube-api-access-mg5xt" (OuterVolumeSpecName: "kube-api-access-mg5xt") pod "2c83c0e0-eb00-4594-8208-554506edefb0" (UID: "2c83c0e0-eb00-4594-8208-554506edefb0"). InnerVolumeSpecName "kube-api-access-mg5xt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:15:02 crc kubenswrapper[4810]: I0110 07:15:02.618989 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c83c0e0-eb00-4594-8208-554506edefb0-config-volume" (OuterVolumeSpecName: "config-volume") pod "2c83c0e0-eb00-4594-8208-554506edefb0" (UID: "2c83c0e0-eb00-4594-8208-554506edefb0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 10 07:15:02 crc kubenswrapper[4810]: I0110 07:15:02.622412 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c83c0e0-eb00-4594-8208-554506edefb0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2c83c0e0-eb00-4594-8208-554506edefb0" (UID: "2c83c0e0-eb00-4594-8208-554506edefb0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 10 07:15:02 crc kubenswrapper[4810]: I0110 07:15:02.710323 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5xt\" (UniqueName: \"kubernetes.io/projected/2c83c0e0-eb00-4594-8208-554506edefb0-kube-api-access-mg5xt\") on node \"crc\" DevicePath \"\"" Jan 10 07:15:02 crc kubenswrapper[4810]: I0110 07:15:02.710380 4810 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c83c0e0-eb00-4594-8208-554506edefb0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 10 07:15:02 crc kubenswrapper[4810]: I0110 07:15:02.710394 4810 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c83c0e0-eb00-4594-8208-554506edefb0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 10 07:15:03 crc kubenswrapper[4810]: I0110 07:15:03.306375 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" 
event={"ID":"2c83c0e0-eb00-4594-8208-554506edefb0","Type":"ContainerDied","Data":"4990550a54bfefb99716534e395d05e71c5684f9f2fb161a1048348bc001b70b"} Jan 10 07:15:03 crc kubenswrapper[4810]: I0110 07:15:03.306419 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4990550a54bfefb99716534e395d05e71c5684f9f2fb161a1048348bc001b70b" Jan 10 07:15:03 crc kubenswrapper[4810]: I0110 07:15:03.306483 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29467155-2w4nd" Jan 10 07:15:11 crc kubenswrapper[4810]: I0110 07:15:11.839108 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/util/0.log" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.033694 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/util/0.log" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.074073 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/pull/0.log" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.091488 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/pull/0.log" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.272821 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/extract/0.log" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.273484 4810 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/util/0.log" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.281355 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/pull/0.log" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.427018 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/extract-utilities/0.log" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.614117 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/extract-content/0.log" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.619897 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/extract-content/0.log" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.642164 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/extract-utilities/0.log" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.692776 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:15:12 crc kubenswrapper[4810]: E0110 07:15:12.692985 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" 
podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.761328 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/extract-utilities/0.log" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.838881 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/extract-content/0.log" Jan 10 07:15:12 crc kubenswrapper[4810]: I0110 07:15:12.972259 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/extract-utilities/0.log" Jan 10 07:15:13 crc kubenswrapper[4810]: I0110 07:15:13.178266 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/registry-server/0.log" Jan 10 07:15:13 crc kubenswrapper[4810]: I0110 07:15:13.183869 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/extract-utilities/0.log" Jan 10 07:15:13 crc kubenswrapper[4810]: I0110 07:15:13.207115 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/extract-content/0.log" Jan 10 07:15:13 crc kubenswrapper[4810]: I0110 07:15:13.224453 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/extract-content/0.log" Jan 10 07:15:13 crc kubenswrapper[4810]: I0110 07:15:13.387285 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/extract-content/0.log" Jan 10 07:15:13 crc kubenswrapper[4810]: I0110 07:15:13.442929 4810 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/extract-utilities/0.log" Jan 10 07:15:13 crc kubenswrapper[4810]: I0110 07:15:13.575883 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mknkg_9e7dec44-02e8-4784-8062-0fc637ebb92f/marketplace-operator/0.log" Jan 10 07:15:13 crc kubenswrapper[4810]: I0110 07:15:13.667491 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/registry-server/0.log" Jan 10 07:15:13 crc kubenswrapper[4810]: I0110 07:15:13.713187 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/extract-utilities/0.log" Jan 10 07:15:13 crc kubenswrapper[4810]: I0110 07:15:13.989849 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/extract-utilities/0.log" Jan 10 07:15:14 crc kubenswrapper[4810]: I0110 07:15:14.010966 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/extract-content/0.log" Jan 10 07:15:14 crc kubenswrapper[4810]: I0110 07:15:14.035736 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/extract-content/0.log" Jan 10 07:15:14 crc kubenswrapper[4810]: I0110 07:15:14.152926 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/extract-utilities/0.log" Jan 10 07:15:14 crc kubenswrapper[4810]: I0110 07:15:14.194459 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/extract-content/0.log" Jan 10 07:15:14 crc kubenswrapper[4810]: I0110 07:15:14.292735 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/registry-server/0.log" Jan 10 07:15:14 crc kubenswrapper[4810]: I0110 07:15:14.329761 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/extract-utilities/0.log" Jan 10 07:15:14 crc kubenswrapper[4810]: I0110 07:15:14.495476 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/extract-content/0.log" Jan 10 07:15:14 crc kubenswrapper[4810]: I0110 07:15:14.508178 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/extract-content/0.log" Jan 10 07:15:14 crc kubenswrapper[4810]: I0110 07:15:14.544607 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/extract-utilities/0.log" Jan 10 07:15:14 crc kubenswrapper[4810]: I0110 07:15:14.656847 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/extract-utilities/0.log" Jan 10 07:15:14 crc kubenswrapper[4810]: I0110 07:15:14.733809 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/extract-content/0.log" Jan 10 07:15:15 crc kubenswrapper[4810]: I0110 07:15:15.048391 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/registry-server/0.log" Jan 10 
07:15:24 crc kubenswrapper[4810]: I0110 07:15:24.692455 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:15:24 crc kubenswrapper[4810]: E0110 07:15:24.693212 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:15:33 crc kubenswrapper[4810]: I0110 07:15:33.769148 4810 scope.go:117] "RemoveContainer" containerID="589f268af0d9472143b2ab83467e2d4afc9fab34e7280357abbbddbf598c305f" Jan 10 07:15:33 crc kubenswrapper[4810]: I0110 07:15:33.818171 4810 scope.go:117] "RemoveContainer" containerID="235032ea543cc4a23073e4308f38172ff8b77dd0959775e10e3677c574f078ff" Jan 10 07:15:33 crc kubenswrapper[4810]: I0110 07:15:33.840541 4810 scope.go:117] "RemoveContainer" containerID="80146de48bbad49ad51595c8fc2edb5e797e8b508e2a871d9c9526257e56939f" Jan 10 07:15:33 crc kubenswrapper[4810]: I0110 07:15:33.864436 4810 scope.go:117] "RemoveContainer" containerID="dbbbf84eae6e223dfa07365e667f7ca09deeb91c3e3f01d0cc2bb1f81d61b9a4" Jan 10 07:15:33 crc kubenswrapper[4810]: I0110 07:15:33.887246 4810 scope.go:117] "RemoveContainer" containerID="860f0c514064154339f307f0e5c5a236022aa3932902ce8237f0e3822515f59c" Jan 10 07:15:35 crc kubenswrapper[4810]: I0110 07:15:35.693034 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:15:35 crc kubenswrapper[4810]: E0110 07:15:35.693405 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:15:50 crc kubenswrapper[4810]: I0110 07:15:50.693037 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:15:50 crc kubenswrapper[4810]: E0110 07:15:50.693755 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:16:03 crc kubenswrapper[4810]: I0110 07:16:03.694014 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:16:03 crc kubenswrapper[4810]: E0110 07:16:03.694816 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:16:18 crc kubenswrapper[4810]: I0110 07:16:18.692654 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:16:18 crc kubenswrapper[4810]: E0110 07:16:18.693475 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:16:25 crc kubenswrapper[4810]: I0110 07:16:25.817271 4810 generic.go:334] "Generic (PLEG): container finished" podID="1ece9af6-7d99-4335-adcc-28378b4222cb" containerID="85c716b34cdc433b67d1bebf892bd4b9c647a3e348cb7cb567bd04b657a53f99" exitCode=0 Jan 10 07:16:25 crc kubenswrapper[4810]: I0110 07:16:25.817366 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x2ftf/must-gather-m5n5r" event={"ID":"1ece9af6-7d99-4335-adcc-28378b4222cb","Type":"ContainerDied","Data":"85c716b34cdc433b67d1bebf892bd4b9c647a3e348cb7cb567bd04b657a53f99"} Jan 10 07:16:25 crc kubenswrapper[4810]: I0110 07:16:25.818980 4810 scope.go:117] "RemoveContainer" containerID="85c716b34cdc433b67d1bebf892bd4b9c647a3e348cb7cb567bd04b657a53f99" Jan 10 07:16:26 crc kubenswrapper[4810]: I0110 07:16:26.089307 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x2ftf_must-gather-m5n5r_1ece9af6-7d99-4335-adcc-28378b4222cb/gather/0.log" Jan 10 07:16:32 crc kubenswrapper[4810]: I0110 07:16:31.696156 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:16:32 crc kubenswrapper[4810]: E0110 07:16:31.696879 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:16:32 crc kubenswrapper[4810]: I0110 07:16:32.868912 4810 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x2ftf/must-gather-m5n5r"] Jan 10 07:16:32 crc kubenswrapper[4810]: I0110 07:16:32.869707 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-x2ftf/must-gather-m5n5r" podUID="1ece9af6-7d99-4335-adcc-28378b4222cb" containerName="copy" containerID="cri-o://c1c1b4c6d5866edafbcb3077f8d279b2c7d4ac2106b8885933e4ccb55b0553c6" gracePeriod=2 Jan 10 07:16:32 crc kubenswrapper[4810]: I0110 07:16:32.874558 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x2ftf/must-gather-m5n5r"] Jan 10 07:16:33 crc kubenswrapper[4810]: I0110 07:16:33.889818 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x2ftf_must-gather-m5n5r_1ece9af6-7d99-4335-adcc-28378b4222cb/copy/0.log" Jan 10 07:16:33 crc kubenswrapper[4810]: I0110 07:16:33.891553 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x2ftf/must-gather-m5n5r" Jan 10 07:16:33 crc kubenswrapper[4810]: I0110 07:16:33.959956 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x2ftf_must-gather-m5n5r_1ece9af6-7d99-4335-adcc-28378b4222cb/copy/0.log" Jan 10 07:16:33 crc kubenswrapper[4810]: I0110 07:16:33.960537 4810 generic.go:334] "Generic (PLEG): container finished" podID="1ece9af6-7d99-4335-adcc-28378b4222cb" containerID="c1c1b4c6d5866edafbcb3077f8d279b2c7d4ac2106b8885933e4ccb55b0553c6" exitCode=143 Jan 10 07:16:33 crc kubenswrapper[4810]: I0110 07:16:33.960598 4810 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x2ftf/must-gather-m5n5r" Jan 10 07:16:33 crc kubenswrapper[4810]: I0110 07:16:33.960634 4810 scope.go:117] "RemoveContainer" containerID="c1c1b4c6d5866edafbcb3077f8d279b2c7d4ac2106b8885933e4ccb55b0553c6" Jan 10 07:16:33 crc kubenswrapper[4810]: I0110 07:16:33.977820 4810 scope.go:117] "RemoveContainer" containerID="85c716b34cdc433b67d1bebf892bd4b9c647a3e348cb7cb567bd04b657a53f99" Jan 10 07:16:34 crc kubenswrapper[4810]: I0110 07:16:34.011966 4810 scope.go:117] "RemoveContainer" containerID="c1c305b4cf7c2818b08e61f5f2903f59a29c52d26ff91b35319410d712474fad" Jan 10 07:16:34 crc kubenswrapper[4810]: I0110 07:16:34.016183 4810 scope.go:117] "RemoveContainer" containerID="c1c1b4c6d5866edafbcb3077f8d279b2c7d4ac2106b8885933e4ccb55b0553c6" Jan 10 07:16:34 crc kubenswrapper[4810]: E0110 07:16:34.016522 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c1b4c6d5866edafbcb3077f8d279b2c7d4ac2106b8885933e4ccb55b0553c6\": container with ID starting with c1c1b4c6d5866edafbcb3077f8d279b2c7d4ac2106b8885933e4ccb55b0553c6 not found: ID does not exist" containerID="c1c1b4c6d5866edafbcb3077f8d279b2c7d4ac2106b8885933e4ccb55b0553c6" Jan 10 07:16:34 crc kubenswrapper[4810]: I0110 07:16:34.016551 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c1b4c6d5866edafbcb3077f8d279b2c7d4ac2106b8885933e4ccb55b0553c6"} err="failed to get container status \"c1c1b4c6d5866edafbcb3077f8d279b2c7d4ac2106b8885933e4ccb55b0553c6\": rpc error: code = NotFound desc = could not find container \"c1c1b4c6d5866edafbcb3077f8d279b2c7d4ac2106b8885933e4ccb55b0553c6\": container with ID starting with c1c1b4c6d5866edafbcb3077f8d279b2c7d4ac2106b8885933e4ccb55b0553c6 not found: ID does not exist" Jan 10 07:16:34 crc kubenswrapper[4810]: I0110 07:16:34.016575 4810 scope.go:117] "RemoveContainer" 
containerID="85c716b34cdc433b67d1bebf892bd4b9c647a3e348cb7cb567bd04b657a53f99" Jan 10 07:16:34 crc kubenswrapper[4810]: E0110 07:16:34.016909 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85c716b34cdc433b67d1bebf892bd4b9c647a3e348cb7cb567bd04b657a53f99\": container with ID starting with 85c716b34cdc433b67d1bebf892bd4b9c647a3e348cb7cb567bd04b657a53f99 not found: ID does not exist" containerID="85c716b34cdc433b67d1bebf892bd4b9c647a3e348cb7cb567bd04b657a53f99" Jan 10 07:16:34 crc kubenswrapper[4810]: I0110 07:16:34.016978 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c716b34cdc433b67d1bebf892bd4b9c647a3e348cb7cb567bd04b657a53f99"} err="failed to get container status \"85c716b34cdc433b67d1bebf892bd4b9c647a3e348cb7cb567bd04b657a53f99\": rpc error: code = NotFound desc = could not find container \"85c716b34cdc433b67d1bebf892bd4b9c647a3e348cb7cb567bd04b657a53f99\": container with ID starting with 85c716b34cdc433b67d1bebf892bd4b9c647a3e348cb7cb567bd04b657a53f99 not found: ID does not exist" Jan 10 07:16:34 crc kubenswrapper[4810]: I0110 07:16:34.064768 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtpfs\" (UniqueName: \"kubernetes.io/projected/1ece9af6-7d99-4335-adcc-28378b4222cb-kube-api-access-xtpfs\") pod \"1ece9af6-7d99-4335-adcc-28378b4222cb\" (UID: \"1ece9af6-7d99-4335-adcc-28378b4222cb\") " Jan 10 07:16:34 crc kubenswrapper[4810]: I0110 07:16:34.064898 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ece9af6-7d99-4335-adcc-28378b4222cb-must-gather-output\") pod \"1ece9af6-7d99-4335-adcc-28378b4222cb\" (UID: \"1ece9af6-7d99-4335-adcc-28378b4222cb\") " Jan 10 07:16:34 crc kubenswrapper[4810]: I0110 07:16:34.071224 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/1ece9af6-7d99-4335-adcc-28378b4222cb-kube-api-access-xtpfs" (OuterVolumeSpecName: "kube-api-access-xtpfs") pod "1ece9af6-7d99-4335-adcc-28378b4222cb" (UID: "1ece9af6-7d99-4335-adcc-28378b4222cb"). InnerVolumeSpecName "kube-api-access-xtpfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:16:34 crc kubenswrapper[4810]: I0110 07:16:34.125964 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ece9af6-7d99-4335-adcc-28378b4222cb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1ece9af6-7d99-4335-adcc-28378b4222cb" (UID: "1ece9af6-7d99-4335-adcc-28378b4222cb"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:16:34 crc kubenswrapper[4810]: I0110 07:16:34.166932 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtpfs\" (UniqueName: \"kubernetes.io/projected/1ece9af6-7d99-4335-adcc-28378b4222cb-kube-api-access-xtpfs\") on node \"crc\" DevicePath \"\"" Jan 10 07:16:34 crc kubenswrapper[4810]: I0110 07:16:34.167271 4810 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ece9af6-7d99-4335-adcc-28378b4222cb-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 10 07:16:35 crc kubenswrapper[4810]: I0110 07:16:35.705825 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ece9af6-7d99-4335-adcc-28378b4222cb" path="/var/lib/kubelet/pods/1ece9af6-7d99-4335-adcc-28378b4222cb/volumes" Jan 10 07:16:42 crc kubenswrapper[4810]: I0110 07:16:42.692378 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:16:42 crc kubenswrapper[4810]: E0110 07:16:42.694284 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:16:55 crc kubenswrapper[4810]: I0110 07:16:55.692615 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:16:55 crc kubenswrapper[4810]: E0110 07:16:55.693487 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:17:09 crc kubenswrapper[4810]: I0110 07:17:09.693151 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:17:09 crc kubenswrapper[4810]: E0110 07:17:09.693964 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:17:21 crc kubenswrapper[4810]: I0110 07:17:21.698705 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:17:21 crc kubenswrapper[4810]: E0110 07:17:21.699587 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:17:34 crc kubenswrapper[4810]: I0110 07:17:34.077122 4810 scope.go:117] "RemoveContainer" containerID="aec6c2dbe10e2ad998aa2121590251241d3f613f7290dedeabc60f8d5ebd564a" Jan 10 07:17:35 crc kubenswrapper[4810]: I0110 07:17:35.692992 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:17:35 crc kubenswrapper[4810]: E0110 07:17:35.693730 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:17:49 crc kubenswrapper[4810]: I0110 07:17:49.693376 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:17:49 crc kubenswrapper[4810]: E0110 07:17:49.694138 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:18:03 crc kubenswrapper[4810]: I0110 07:18:03.692696 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 
07:18:03 crc kubenswrapper[4810]: E0110 07:18:03.693554 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:18:16 crc kubenswrapper[4810]: I0110 07:18:16.693426 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:18:16 crc kubenswrapper[4810]: E0110 07:18:16.694357 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:18:29 crc kubenswrapper[4810]: I0110 07:18:29.692989 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:18:29 crc kubenswrapper[4810]: E0110 07:18:29.694590 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:18:34 crc kubenswrapper[4810]: I0110 07:18:34.132908 4810 scope.go:117] "RemoveContainer" 
containerID="ef91f5cd09e625b653f15d0ad1bd620293a058cc6b1f41e3c7658b45bb76afc8" Jan 10 07:18:40 crc kubenswrapper[4810]: I0110 07:18:40.693348 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:18:40 crc kubenswrapper[4810]: E0110 07:18:40.694159 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:18:55 crc kubenswrapper[4810]: I0110 07:18:55.692892 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:18:55 crc kubenswrapper[4810]: E0110 07:18:55.694052 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:19:06 crc kubenswrapper[4810]: I0110 07:19:06.693296 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:19:06 crc kubenswrapper[4810]: E0110 07:19:06.694226 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:19:18 crc kubenswrapper[4810]: I0110 07:19:18.694044 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:19:18 crc kubenswrapper[4810]: E0110 07:19:18.695065 4810 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-8c5qp_openshift-machine-config-operator(b5b79429-9259-412f-bab8-27865ab7029b)\"" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" Jan 10 07:19:26 crc kubenswrapper[4810]: I0110 07:19:26.970921 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-78676/must-gather-5q89n"] Jan 10 07:19:26 crc kubenswrapper[4810]: E0110 07:19:26.971818 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c83c0e0-eb00-4594-8208-554506edefb0" containerName="collect-profiles" Jan 10 07:19:26 crc kubenswrapper[4810]: I0110 07:19:26.971838 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c83c0e0-eb00-4594-8208-554506edefb0" containerName="collect-profiles" Jan 10 07:19:26 crc kubenswrapper[4810]: E0110 07:19:26.971864 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ece9af6-7d99-4335-adcc-28378b4222cb" containerName="gather" Jan 10 07:19:26 crc kubenswrapper[4810]: I0110 07:19:26.971905 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ece9af6-7d99-4335-adcc-28378b4222cb" containerName="gather" Jan 10 07:19:26 crc kubenswrapper[4810]: E0110 07:19:26.971934 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ece9af6-7d99-4335-adcc-28378b4222cb" containerName="copy" Jan 10 07:19:26 crc kubenswrapper[4810]: I0110 07:19:26.971949 4810 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1ece9af6-7d99-4335-adcc-28378b4222cb" containerName="copy" Jan 10 07:19:26 crc kubenswrapper[4810]: I0110 07:19:26.972115 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ece9af6-7d99-4335-adcc-28378b4222cb" containerName="gather" Jan 10 07:19:26 crc kubenswrapper[4810]: I0110 07:19:26.972136 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c83c0e0-eb00-4594-8208-554506edefb0" containerName="collect-profiles" Jan 10 07:19:26 crc kubenswrapper[4810]: I0110 07:19:26.972153 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ece9af6-7d99-4335-adcc-28378b4222cb" containerName="copy" Jan 10 07:19:26 crc kubenswrapper[4810]: I0110 07:19:26.973219 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-78676/must-gather-5q89n" Jan 10 07:19:26 crc kubenswrapper[4810]: I0110 07:19:26.978703 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-78676"/"openshift-service-ca.crt" Jan 10 07:19:26 crc kubenswrapper[4810]: I0110 07:19:26.979717 4810 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-78676"/"kube-root-ca.crt" Jan 10 07:19:26 crc kubenswrapper[4810]: I0110 07:19:26.981928 4810 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-78676"/"default-dockercfg-9nxk9" Jan 10 07:19:27 crc kubenswrapper[4810]: I0110 07:19:27.021614 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-78676/must-gather-5q89n"] Jan 10 07:19:27 crc kubenswrapper[4810]: I0110 07:19:27.078554 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3452f6d8-cfdf-44fb-8957-fec971ac4bc6-must-gather-output\") pod \"must-gather-5q89n\" (UID: \"3452f6d8-cfdf-44fb-8957-fec971ac4bc6\") " 
pod="openshift-must-gather-78676/must-gather-5q89n" Jan 10 07:19:27 crc kubenswrapper[4810]: I0110 07:19:27.078610 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gm7r\" (UniqueName: \"kubernetes.io/projected/3452f6d8-cfdf-44fb-8957-fec971ac4bc6-kube-api-access-2gm7r\") pod \"must-gather-5q89n\" (UID: \"3452f6d8-cfdf-44fb-8957-fec971ac4bc6\") " pod="openshift-must-gather-78676/must-gather-5q89n" Jan 10 07:19:27 crc kubenswrapper[4810]: I0110 07:19:27.179745 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3452f6d8-cfdf-44fb-8957-fec971ac4bc6-must-gather-output\") pod \"must-gather-5q89n\" (UID: \"3452f6d8-cfdf-44fb-8957-fec971ac4bc6\") " pod="openshift-must-gather-78676/must-gather-5q89n" Jan 10 07:19:27 crc kubenswrapper[4810]: I0110 07:19:27.179800 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gm7r\" (UniqueName: \"kubernetes.io/projected/3452f6d8-cfdf-44fb-8957-fec971ac4bc6-kube-api-access-2gm7r\") pod \"must-gather-5q89n\" (UID: \"3452f6d8-cfdf-44fb-8957-fec971ac4bc6\") " pod="openshift-must-gather-78676/must-gather-5q89n" Jan 10 07:19:27 crc kubenswrapper[4810]: I0110 07:19:27.180241 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3452f6d8-cfdf-44fb-8957-fec971ac4bc6-must-gather-output\") pod \"must-gather-5q89n\" (UID: \"3452f6d8-cfdf-44fb-8957-fec971ac4bc6\") " pod="openshift-must-gather-78676/must-gather-5q89n" Jan 10 07:19:27 crc kubenswrapper[4810]: I0110 07:19:27.206667 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gm7r\" (UniqueName: \"kubernetes.io/projected/3452f6d8-cfdf-44fb-8957-fec971ac4bc6-kube-api-access-2gm7r\") pod \"must-gather-5q89n\" (UID: \"3452f6d8-cfdf-44fb-8957-fec971ac4bc6\") " 
pod="openshift-must-gather-78676/must-gather-5q89n" Jan 10 07:19:27 crc kubenswrapper[4810]: I0110 07:19:27.302090 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-78676/must-gather-5q89n" Jan 10 07:19:27 crc kubenswrapper[4810]: I0110 07:19:27.703127 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-78676/must-gather-5q89n"] Jan 10 07:19:28 crc kubenswrapper[4810]: I0110 07:19:28.034396 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78676/must-gather-5q89n" event={"ID":"3452f6d8-cfdf-44fb-8957-fec971ac4bc6","Type":"ContainerStarted","Data":"be65ebe02619038457f21b414130729c37d19b43c58f845d4abf51db0cdf016c"} Jan 10 07:19:29 crc kubenswrapper[4810]: I0110 07:19:29.041590 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78676/must-gather-5q89n" event={"ID":"3452f6d8-cfdf-44fb-8957-fec971ac4bc6","Type":"ContainerStarted","Data":"2aa7b7515f6d6c1c97057f6afde83d505200ba984e14927533c16fee2049c0ab"} Jan 10 07:19:29 crc kubenswrapper[4810]: I0110 07:19:29.041940 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78676/must-gather-5q89n" event={"ID":"3452f6d8-cfdf-44fb-8957-fec971ac4bc6","Type":"ContainerStarted","Data":"704cd4c95e8b13ef872271d0b9d842a153414dd0b19883307b66b1367d27709a"} Jan 10 07:19:31 crc kubenswrapper[4810]: I0110 07:19:31.696545 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910" Jan 10 07:19:32 crc kubenswrapper[4810]: I0110 07:19:32.057528 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerStarted","Data":"4aa9c464104c269dca7a0d86c334ea975f2bd2117d699200a7a55f2962ad28e7"} Jan 10 07:19:33 crc kubenswrapper[4810]: I0110 07:19:33.081821 4810 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-78676/must-gather-5q89n" podStartSLOduration=7.081800103 podStartE2EDuration="7.081800103s" podCreationTimestamp="2026-01-10 07:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 07:19:29.067703567 +0000 UTC m=+1997.683196450" watchObservedRunningTime="2026-01-10 07:19:33.081800103 +0000 UTC m=+2001.697292986" Jan 10 07:19:34 crc kubenswrapper[4810]: I0110 07:19:34.180520 4810 scope.go:117] "RemoveContainer" containerID="55f11bcb141c37db068248bb010095bd67ea3c2f09cacd84260227af552d7be3" Jan 10 07:20:12 crc kubenswrapper[4810]: I0110 07:20:12.291802 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-r9mqp_49455ed9-e8ab-44c8-9075-2ffbbebe36a8/control-plane-machine-set-operator/0.log" Jan 10 07:20:12 crc kubenswrapper[4810]: I0110 07:20:12.463761 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t88kw_85c62a01-c818-47b4-92fd-bcd87d8218a8/kube-rbac-proxy/0.log" Jan 10 07:20:12 crc kubenswrapper[4810]: I0110 07:20:12.470504 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t88kw_85c62a01-c818-47b4-92fd-bcd87d8218a8/machine-api-operator/0.log" Jan 10 07:20:38 crc kubenswrapper[4810]: I0110 07:20:38.917836 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-qv4c2_7a4eb8a0-6ca1-4dca-a9a5-37a00569037d/controller/0.log" Jan 10 07:20:38 crc kubenswrapper[4810]: I0110 07:20:38.929453 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-qv4c2_7a4eb8a0-6ca1-4dca-a9a5-37a00569037d/kube-rbac-proxy/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.074618 4810 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-frr-files/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.359222 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-metrics/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.397617 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-frr-files/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.433043 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-reloader/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.439510 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-reloader/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.572141 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-reloader/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.600243 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-frr-files/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.615874 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-metrics/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.643815 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-metrics/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.811827 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-metrics/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.822050 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-reloader/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.824674 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/cp-frr-files/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.833740 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/controller/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.983265 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/frr-metrics/0.log" Jan 10 07:20:39 crc kubenswrapper[4810]: I0110 07:20:39.988183 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/kube-rbac-proxy/0.log" Jan 10 07:20:40 crc kubenswrapper[4810]: I0110 07:20:40.018793 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/kube-rbac-proxy-frr/0.log" Jan 10 07:20:40 crc kubenswrapper[4810]: I0110 07:20:40.186789 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-rhrms_aa0c6bd0-4efa-4639-b474-b77ee0c4a2ed/frr-k8s-webhook-server/0.log" Jan 10 07:20:40 crc kubenswrapper[4810]: I0110 07:20:40.225385 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/reloader/0.log" Jan 10 07:20:40 crc kubenswrapper[4810]: I0110 07:20:40.463574 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-78d994474f-c66kc_1c2f8c1a-da9b-4758-8470-7495e89762af/manager/0.log" Jan 10 07:20:40 crc kubenswrapper[4810]: I0110 07:20:40.533731 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fxt7n_ec952ca1-31c1-4f2c-ae6a-2a8c271304e0/frr/0.log" Jan 10 07:20:40 crc kubenswrapper[4810]: I0110 07:20:40.621701 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75f446db8b-dsgn2_d5e40a28-b80a-4ba9-87e7-039e63e9e4d0/webhook-server/0.log" Jan 10 07:20:40 crc kubenswrapper[4810]: I0110 07:20:40.653160 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7rjff_98e1cf89-483c-49fc-a8b8-e89615b4d86d/kube-rbac-proxy/0.log" Jan 10 07:20:40 crc kubenswrapper[4810]: I0110 07:20:40.794805 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7rjff_98e1cf89-483c-49fc-a8b8-e89615b4d86d/speaker/0.log" Jan 10 07:21:04 crc kubenswrapper[4810]: I0110 07:21:04.213057 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/util/0.log" Jan 10 07:21:04 crc kubenswrapper[4810]: I0110 07:21:04.372466 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/util/0.log" Jan 10 07:21:04 crc kubenswrapper[4810]: I0110 07:21:04.386108 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/pull/0.log" Jan 10 07:21:04 crc kubenswrapper[4810]: I0110 07:21:04.405684 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/pull/0.log" Jan 10 07:21:04 crc kubenswrapper[4810]: I0110 07:21:04.564595 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/extract/0.log" Jan 10 07:21:04 crc kubenswrapper[4810]: I0110 07:21:04.568097 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/pull/0.log" Jan 10 07:21:04 crc kubenswrapper[4810]: I0110 07:21:04.576281 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4jwx5h_97ca5502-79e5-4da9-9267-8cd12ce1d9ef/util/0.log" Jan 10 07:21:04 crc kubenswrapper[4810]: I0110 07:21:04.729310 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/extract-utilities/0.log" Jan 10 07:21:04 crc kubenswrapper[4810]: I0110 07:21:04.871029 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/extract-utilities/0.log" Jan 10 07:21:04 crc kubenswrapper[4810]: I0110 07:21:04.903448 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/extract-content/0.log" Jan 10 07:21:04 crc kubenswrapper[4810]: I0110 07:21:04.933413 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/extract-content/0.log" Jan 10 07:21:05 crc kubenswrapper[4810]: I0110 07:21:05.075127 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/extract-utilities/0.log" Jan 10 07:21:05 crc kubenswrapper[4810]: I0110 07:21:05.105114 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/extract-content/0.log" Jan 10 07:21:05 crc kubenswrapper[4810]: I0110 07:21:05.265336 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/extract-utilities/0.log" Jan 10 07:21:05 crc kubenswrapper[4810]: I0110 07:21:05.498071 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/extract-utilities/0.log" Jan 10 07:21:05 crc kubenswrapper[4810]: I0110 07:21:05.502039 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-db5l6_678a3ed9-5d4e-493f-9be6-fad386a36363/registry-server/0.log" Jan 10 07:21:05 crc kubenswrapper[4810]: I0110 07:21:05.504820 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/extract-content/0.log" Jan 10 07:21:05 crc kubenswrapper[4810]: I0110 07:21:05.559625 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/extract-content/0.log" Jan 10 07:21:05 crc kubenswrapper[4810]: I0110 07:21:05.669881 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/extract-content/0.log" Jan 10 07:21:05 crc kubenswrapper[4810]: I0110 07:21:05.691628 4810 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/extract-utilities/0.log"
Jan 10 07:21:05 crc kubenswrapper[4810]: I0110 07:21:05.904367 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mknkg_9e7dec44-02e8-4784-8062-0fc637ebb92f/marketplace-operator/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.036643 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fblgv_98c79d9f-6b91-4a9d-9d4f-b3ec819b2f92/registry-server/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.123796 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/extract-utilities/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.237539 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/extract-content/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.237564 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/extract-content/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.255821 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/extract-utilities/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.433532 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/extract-utilities/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.447111 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/extract-content/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.496040 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6cvnc_783ca84a-455c-4f14-8b77-4a9689c46026/registry-server/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.605457 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/extract-utilities/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.755689 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/extract-content/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.760375 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/extract-utilities/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.789953 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/extract-content/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.931345 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/extract-utilities/0.log"
Jan 10 07:21:06 crc kubenswrapper[4810]: I0110 07:21:06.936057 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/extract-content/0.log"
Jan 10 07:21:07 crc kubenswrapper[4810]: I0110 07:21:07.391517 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2n6cp_61a578a2-8b7d-4d6d-94bd-a258853f79a2/registry-server/0.log"
Jan 10 07:21:50 crc kubenswrapper[4810]: I0110 07:21:50.883559 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 07:21:50 crc kubenswrapper[4810]: I0110 07:21:50.884176 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 07:22:18 crc kubenswrapper[4810]: I0110 07:22:18.992668 4810 generic.go:334] "Generic (PLEG): container finished" podID="3452f6d8-cfdf-44fb-8957-fec971ac4bc6" containerID="704cd4c95e8b13ef872271d0b9d842a153414dd0b19883307b66b1367d27709a" exitCode=0
Jan 10 07:22:18 crc kubenswrapper[4810]: I0110 07:22:18.992792 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78676/must-gather-5q89n" event={"ID":"3452f6d8-cfdf-44fb-8957-fec971ac4bc6","Type":"ContainerDied","Data":"704cd4c95e8b13ef872271d0b9d842a153414dd0b19883307b66b1367d27709a"}
Jan 10 07:22:18 crc kubenswrapper[4810]: I0110 07:22:18.993592 4810 scope.go:117] "RemoveContainer" containerID="704cd4c95e8b13ef872271d0b9d842a153414dd0b19883307b66b1367d27709a"
Jan 10 07:22:19 crc kubenswrapper[4810]: I0110 07:22:19.098540 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-78676_must-gather-5q89n_3452f6d8-cfdf-44fb-8957-fec971ac4bc6/gather/0.log"
Jan 10 07:22:20 crc kubenswrapper[4810]: I0110 07:22:20.882634 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 07:22:20 crc kubenswrapper[4810]: I0110 07:22:20.882703 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 07:22:27 crc kubenswrapper[4810]: I0110 07:22:27.717820 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-78676/must-gather-5q89n"]
Jan 10 07:22:27 crc kubenswrapper[4810]: I0110 07:22:27.718703 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-78676/must-gather-5q89n" podUID="3452f6d8-cfdf-44fb-8957-fec971ac4bc6" containerName="copy" containerID="cri-o://2aa7b7515f6d6c1c97057f6afde83d505200ba984e14927533c16fee2049c0ab" gracePeriod=2
Jan 10 07:22:27 crc kubenswrapper[4810]: I0110 07:22:27.722709 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-78676/must-gather-5q89n"]
Jan 10 07:22:28 crc kubenswrapper[4810]: I0110 07:22:28.048137 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-78676_must-gather-5q89n_3452f6d8-cfdf-44fb-8957-fec971ac4bc6/copy/0.log"
Jan 10 07:22:28 crc kubenswrapper[4810]: I0110 07:22:28.048769 4810 generic.go:334] "Generic (PLEG): container finished" podID="3452f6d8-cfdf-44fb-8957-fec971ac4bc6" containerID="2aa7b7515f6d6c1c97057f6afde83d505200ba984e14927533c16fee2049c0ab" exitCode=143
Jan 10 07:22:28 crc kubenswrapper[4810]: I0110 07:22:28.048811 4810 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be65ebe02619038457f21b414130729c37d19b43c58f845d4abf51db0cdf016c"
Jan 10 07:22:28 crc kubenswrapper[4810]: I0110 07:22:28.069881 4810 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-78676_must-gather-5q89n_3452f6d8-cfdf-44fb-8957-fec971ac4bc6/copy/0.log"
Jan 10 07:22:28 crc kubenswrapper[4810]: I0110 07:22:28.070819 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-78676/must-gather-5q89n"
Jan 10 07:22:28 crc kubenswrapper[4810]: I0110 07:22:28.173744 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3452f6d8-cfdf-44fb-8957-fec971ac4bc6-must-gather-output\") pod \"3452f6d8-cfdf-44fb-8957-fec971ac4bc6\" (UID: \"3452f6d8-cfdf-44fb-8957-fec971ac4bc6\") "
Jan 10 07:22:28 crc kubenswrapper[4810]: I0110 07:22:28.174153 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gm7r\" (UniqueName: \"kubernetes.io/projected/3452f6d8-cfdf-44fb-8957-fec971ac4bc6-kube-api-access-2gm7r\") pod \"3452f6d8-cfdf-44fb-8957-fec971ac4bc6\" (UID: \"3452f6d8-cfdf-44fb-8957-fec971ac4bc6\") "
Jan 10 07:22:28 crc kubenswrapper[4810]: I0110 07:22:28.186362 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3452f6d8-cfdf-44fb-8957-fec971ac4bc6-kube-api-access-2gm7r" (OuterVolumeSpecName: "kube-api-access-2gm7r") pod "3452f6d8-cfdf-44fb-8957-fec971ac4bc6" (UID: "3452f6d8-cfdf-44fb-8957-fec971ac4bc6"). InnerVolumeSpecName "kube-api-access-2gm7r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:22:28 crc kubenswrapper[4810]: I0110 07:22:28.237088 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3452f6d8-cfdf-44fb-8957-fec971ac4bc6-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3452f6d8-cfdf-44fb-8957-fec971ac4bc6" (UID: "3452f6d8-cfdf-44fb-8957-fec971ac4bc6"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:22:28 crc kubenswrapper[4810]: I0110 07:22:28.275394 4810 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3452f6d8-cfdf-44fb-8957-fec971ac4bc6-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 10 07:22:28 crc kubenswrapper[4810]: I0110 07:22:28.275427 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gm7r\" (UniqueName: \"kubernetes.io/projected/3452f6d8-cfdf-44fb-8957-fec971ac4bc6-kube-api-access-2gm7r\") on node \"crc\" DevicePath \"\""
Jan 10 07:22:29 crc kubenswrapper[4810]: I0110 07:22:29.053676 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-78676/must-gather-5q89n"
Jan 10 07:22:29 crc kubenswrapper[4810]: I0110 07:22:29.699795 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3452f6d8-cfdf-44fb-8957-fec971ac4bc6" path="/var/lib/kubelet/pods/3452f6d8-cfdf-44fb-8957-fec971ac4bc6/volumes"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.556409 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7pp7x"]
Jan 10 07:22:31 crc kubenswrapper[4810]: E0110 07:22:31.556917 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3452f6d8-cfdf-44fb-8957-fec971ac4bc6" containerName="copy"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.556930 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3452f6d8-cfdf-44fb-8957-fec971ac4bc6" containerName="copy"
Jan 10 07:22:31 crc kubenswrapper[4810]: E0110 07:22:31.556952 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3452f6d8-cfdf-44fb-8957-fec971ac4bc6" containerName="gather"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.556962 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="3452f6d8-cfdf-44fb-8957-fec971ac4bc6" containerName="gather"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.557217 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3452f6d8-cfdf-44fb-8957-fec971ac4bc6" containerName="gather"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.557236 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="3452f6d8-cfdf-44fb-8957-fec971ac4bc6" containerName="copy"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.558687 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.586286 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7pp7x"]
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.618913 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f4b230-395b-4553-837f-3753c105d3db-catalog-content\") pod \"community-operators-7pp7x\" (UID: \"00f4b230-395b-4553-837f-3753c105d3db\") " pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.618999 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f4b230-395b-4553-837f-3753c105d3db-utilities\") pod \"community-operators-7pp7x\" (UID: \"00f4b230-395b-4553-837f-3753c105d3db\") " pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.619037 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49s6r\" (UniqueName: \"kubernetes.io/projected/00f4b230-395b-4553-837f-3753c105d3db-kube-api-access-49s6r\") pod \"community-operators-7pp7x\" (UID: \"00f4b230-395b-4553-837f-3753c105d3db\") " pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.720086 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f4b230-395b-4553-837f-3753c105d3db-catalog-content\") pod \"community-operators-7pp7x\" (UID: \"00f4b230-395b-4553-837f-3753c105d3db\") " pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.720141 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f4b230-395b-4553-837f-3753c105d3db-utilities\") pod \"community-operators-7pp7x\" (UID: \"00f4b230-395b-4553-837f-3753c105d3db\") " pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.720160 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49s6r\" (UniqueName: \"kubernetes.io/projected/00f4b230-395b-4553-837f-3753c105d3db-kube-api-access-49s6r\") pod \"community-operators-7pp7x\" (UID: \"00f4b230-395b-4553-837f-3753c105d3db\") " pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.720682 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f4b230-395b-4553-837f-3753c105d3db-catalog-content\") pod \"community-operators-7pp7x\" (UID: \"00f4b230-395b-4553-837f-3753c105d3db\") " pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.721278 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f4b230-395b-4553-837f-3753c105d3db-utilities\") pod \"community-operators-7pp7x\" (UID: \"00f4b230-395b-4553-837f-3753c105d3db\") " pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.746480 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49s6r\" (UniqueName: \"kubernetes.io/projected/00f4b230-395b-4553-837f-3753c105d3db-kube-api-access-49s6r\") pod \"community-operators-7pp7x\" (UID: \"00f4b230-395b-4553-837f-3753c105d3db\") " pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:31 crc kubenswrapper[4810]: I0110 07:22:31.885051 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:32 crc kubenswrapper[4810]: I0110 07:22:32.134847 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7pp7x"]
Jan 10 07:22:32 crc kubenswrapper[4810]: W0110 07:22:32.143107 4810 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00f4b230_395b_4553_837f_3753c105d3db.slice/crio-f6f205a8bd6deccd4b1c0e0d678605a1d4350b8d628cdd2413dc0f21854cc16c WatchSource:0}: Error finding container f6f205a8bd6deccd4b1c0e0d678605a1d4350b8d628cdd2413dc0f21854cc16c: Status 404 returned error can't find the container with id f6f205a8bd6deccd4b1c0e0d678605a1d4350b8d628cdd2413dc0f21854cc16c
Jan 10 07:22:33 crc kubenswrapper[4810]: I0110 07:22:33.074971 4810 generic.go:334] "Generic (PLEG): container finished" podID="00f4b230-395b-4553-837f-3753c105d3db" containerID="60bfff1fc49eb4581507052028781d319363a175c1613a30480371d864d8425d" exitCode=0
Jan 10 07:22:33 crc kubenswrapper[4810]: I0110 07:22:33.075038 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pp7x" event={"ID":"00f4b230-395b-4553-837f-3753c105d3db","Type":"ContainerDied","Data":"60bfff1fc49eb4581507052028781d319363a175c1613a30480371d864d8425d"}
Jan 10 07:22:33 crc kubenswrapper[4810]: I0110 07:22:33.075400 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pp7x" event={"ID":"00f4b230-395b-4553-837f-3753c105d3db","Type":"ContainerStarted","Data":"f6f205a8bd6deccd4b1c0e0d678605a1d4350b8d628cdd2413dc0f21854cc16c"}
Jan 10 07:22:33 crc kubenswrapper[4810]: I0110 07:22:33.077635 4810 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 10 07:22:34 crc kubenswrapper[4810]: I0110 07:22:34.082389 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pp7x" event={"ID":"00f4b230-395b-4553-837f-3753c105d3db","Type":"ContainerStarted","Data":"7824a40161d8b107f28910a949db2f3fc64bc7ab6909d81b3250c688c3231cde"}
Jan 10 07:22:35 crc kubenswrapper[4810]: I0110 07:22:35.088846 4810 generic.go:334] "Generic (PLEG): container finished" podID="00f4b230-395b-4553-837f-3753c105d3db" containerID="7824a40161d8b107f28910a949db2f3fc64bc7ab6909d81b3250c688c3231cde" exitCode=0
Jan 10 07:22:35 crc kubenswrapper[4810]: I0110 07:22:35.088923 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pp7x" event={"ID":"00f4b230-395b-4553-837f-3753c105d3db","Type":"ContainerDied","Data":"7824a40161d8b107f28910a949db2f3fc64bc7ab6909d81b3250c688c3231cde"}
Jan 10 07:22:37 crc kubenswrapper[4810]: I0110 07:22:37.106060 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pp7x" event={"ID":"00f4b230-395b-4553-837f-3753c105d3db","Type":"ContainerStarted","Data":"c613f9fdf759aeff9057d8c3191cce63d9fa4395a25cc3d3819cf9e35016126f"}
Jan 10 07:22:37 crc kubenswrapper[4810]: I0110 07:22:37.124614 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7pp7x" podStartSLOduration=3.281502846 podStartE2EDuration="6.124597014s" podCreationTimestamp="2026-01-10 07:22:31 +0000 UTC" firstStartedPulling="2026-01-10 07:22:33.077398623 +0000 UTC m=+2181.692891506" lastFinishedPulling="2026-01-10 07:22:35.920492791 +0000 UTC m=+2184.535985674" observedRunningTime="2026-01-10 07:22:37.122998665 +0000 UTC m=+2185.738491558" watchObservedRunningTime="2026-01-10 07:22:37.124597014 +0000 UTC m=+2185.740089897"
Jan 10 07:22:41 crc kubenswrapper[4810]: I0110 07:22:41.885357 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:41 crc kubenswrapper[4810]: I0110 07:22:41.885947 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:41 crc kubenswrapper[4810]: I0110 07:22:41.928264 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:42 crc kubenswrapper[4810]: I0110 07:22:42.173109 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:42 crc kubenswrapper[4810]: I0110 07:22:42.231022 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7pp7x"]
Jan 10 07:22:44 crc kubenswrapper[4810]: I0110 07:22:44.146915 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7pp7x" podUID="00f4b230-395b-4553-837f-3753c105d3db" containerName="registry-server" containerID="cri-o://c613f9fdf759aeff9057d8c3191cce63d9fa4395a25cc3d3819cf9e35016126f" gracePeriod=2
Jan 10 07:22:45 crc kubenswrapper[4810]: I0110 07:22:45.154025 4810 generic.go:334] "Generic (PLEG): container finished" podID="00f4b230-395b-4553-837f-3753c105d3db" containerID="c613f9fdf759aeff9057d8c3191cce63d9fa4395a25cc3d3819cf9e35016126f" exitCode=0
Jan 10 07:22:45 crc kubenswrapper[4810]: I0110 07:22:45.154072 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pp7x" event={"ID":"00f4b230-395b-4553-837f-3753c105d3db","Type":"ContainerDied","Data":"c613f9fdf759aeff9057d8c3191cce63d9fa4395a25cc3d3819cf9e35016126f"}
Jan 10 07:22:45 crc kubenswrapper[4810]: I0110 07:22:45.610514 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:45 crc kubenswrapper[4810]: I0110 07:22:45.805213 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f4b230-395b-4553-837f-3753c105d3db-utilities\") pod \"00f4b230-395b-4553-837f-3753c105d3db\" (UID: \"00f4b230-395b-4553-837f-3753c105d3db\") "
Jan 10 07:22:45 crc kubenswrapper[4810]: I0110 07:22:45.805298 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49s6r\" (UniqueName: \"kubernetes.io/projected/00f4b230-395b-4553-837f-3753c105d3db-kube-api-access-49s6r\") pod \"00f4b230-395b-4553-837f-3753c105d3db\" (UID: \"00f4b230-395b-4553-837f-3753c105d3db\") "
Jan 10 07:22:45 crc kubenswrapper[4810]: I0110 07:22:45.805400 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f4b230-395b-4553-837f-3753c105d3db-catalog-content\") pod \"00f4b230-395b-4553-837f-3753c105d3db\" (UID: \"00f4b230-395b-4553-837f-3753c105d3db\") "
Jan 10 07:22:45 crc kubenswrapper[4810]: I0110 07:22:45.806477 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f4b230-395b-4553-837f-3753c105d3db-utilities" (OuterVolumeSpecName: "utilities") pod "00f4b230-395b-4553-837f-3753c105d3db" (UID: "00f4b230-395b-4553-837f-3753c105d3db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:22:45 crc kubenswrapper[4810]: I0110 07:22:45.810803 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f4b230-395b-4553-837f-3753c105d3db-kube-api-access-49s6r" (OuterVolumeSpecName: "kube-api-access-49s6r") pod "00f4b230-395b-4553-837f-3753c105d3db" (UID: "00f4b230-395b-4553-837f-3753c105d3db"). InnerVolumeSpecName "kube-api-access-49s6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:22:45 crc kubenswrapper[4810]: I0110 07:22:45.858488 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00f4b230-395b-4553-837f-3753c105d3db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00f4b230-395b-4553-837f-3753c105d3db" (UID: "00f4b230-395b-4553-837f-3753c105d3db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:22:45 crc kubenswrapper[4810]: I0110 07:22:45.907095 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00f4b230-395b-4553-837f-3753c105d3db-utilities\") on node \"crc\" DevicePath \"\""
Jan 10 07:22:45 crc kubenswrapper[4810]: I0110 07:22:45.907130 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49s6r\" (UniqueName: \"kubernetes.io/projected/00f4b230-395b-4553-837f-3753c105d3db-kube-api-access-49s6r\") on node \"crc\" DevicePath \"\""
Jan 10 07:22:45 crc kubenswrapper[4810]: I0110 07:22:45.907140 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00f4b230-395b-4553-837f-3753c105d3db-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 10 07:22:46 crc kubenswrapper[4810]: I0110 07:22:46.164475 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pp7x" event={"ID":"00f4b230-395b-4553-837f-3753c105d3db","Type":"ContainerDied","Data":"f6f205a8bd6deccd4b1c0e0d678605a1d4350b8d628cdd2413dc0f21854cc16c"}
Jan 10 07:22:46 crc kubenswrapper[4810]: I0110 07:22:46.164549 4810 scope.go:117] "RemoveContainer" containerID="c613f9fdf759aeff9057d8c3191cce63d9fa4395a25cc3d3819cf9e35016126f"
Jan 10 07:22:46 crc kubenswrapper[4810]: I0110 07:22:46.164616 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7pp7x"
Jan 10 07:22:46 crc kubenswrapper[4810]: I0110 07:22:46.183539 4810 scope.go:117] "RemoveContainer" containerID="7824a40161d8b107f28910a949db2f3fc64bc7ab6909d81b3250c688c3231cde"
Jan 10 07:22:46 crc kubenswrapper[4810]: I0110 07:22:46.215508 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7pp7x"]
Jan 10 07:22:46 crc kubenswrapper[4810]: I0110 07:22:46.223170 4810 scope.go:117] "RemoveContainer" containerID="60bfff1fc49eb4581507052028781d319363a175c1613a30480371d864d8425d"
Jan 10 07:22:46 crc kubenswrapper[4810]: I0110 07:22:46.224603 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7pp7x"]
Jan 10 07:22:47 crc kubenswrapper[4810]: I0110 07:22:47.705511 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00f4b230-395b-4553-837f-3753c105d3db" path="/var/lib/kubelet/pods/00f4b230-395b-4553-837f-3753c105d3db/volumes"
Jan 10 07:22:50 crc kubenswrapper[4810]: I0110 07:22:50.882815 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 07:22:50 crc kubenswrapper[4810]: I0110 07:22:50.882911 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 07:22:50 crc kubenswrapper[4810]: I0110 07:22:50.882965 4810 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp"
Jan 10 07:22:50 crc kubenswrapper[4810]: I0110 07:22:50.883641 4810 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4aa9c464104c269dca7a0d86c334ea975f2bd2117d699200a7a55f2962ad28e7"} pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 10 07:22:50 crc kubenswrapper[4810]: I0110 07:22:50.883709 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" containerID="cri-o://4aa9c464104c269dca7a0d86c334ea975f2bd2117d699200a7a55f2962ad28e7" gracePeriod=600
Jan 10 07:22:51 crc kubenswrapper[4810]: I0110 07:22:51.195300 4810 generic.go:334] "Generic (PLEG): container finished" podID="b5b79429-9259-412f-bab8-27865ab7029b" containerID="4aa9c464104c269dca7a0d86c334ea975f2bd2117d699200a7a55f2962ad28e7" exitCode=0
Jan 10 07:22:51 crc kubenswrapper[4810]: I0110 07:22:51.195562 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerDied","Data":"4aa9c464104c269dca7a0d86c334ea975f2bd2117d699200a7a55f2962ad28e7"}
Jan 10 07:22:51 crc kubenswrapper[4810]: I0110 07:22:51.195592 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" event={"ID":"b5b79429-9259-412f-bab8-27865ab7029b","Type":"ContainerStarted","Data":"003a9e63b0417f6a17f2eb34e2958189ae2374ec46963f3628ea8024317d4288"}
Jan 10 07:22:51 crc kubenswrapper[4810]: I0110 07:22:51.195614 4810 scope.go:117] "RemoveContainer" containerID="09237d9cafed50dee3060de82ade1a6ff2457c25b77122dcca9e34bb60454910"
Jan 10 07:23:48 crc kubenswrapper[4810]: I0110 07:23:48.892280 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dfpwx"]
Jan 10 07:23:48 crc kubenswrapper[4810]: E0110 07:23:48.892974 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f4b230-395b-4553-837f-3753c105d3db" containerName="extract-utilities"
Jan 10 07:23:48 crc kubenswrapper[4810]: I0110 07:23:48.892987 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f4b230-395b-4553-837f-3753c105d3db" containerName="extract-utilities"
Jan 10 07:23:48 crc kubenswrapper[4810]: E0110 07:23:48.893007 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f4b230-395b-4553-837f-3753c105d3db" containerName="extract-content"
Jan 10 07:23:48 crc kubenswrapper[4810]: I0110 07:23:48.893013 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f4b230-395b-4553-837f-3753c105d3db" containerName="extract-content"
Jan 10 07:23:48 crc kubenswrapper[4810]: E0110 07:23:48.893021 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f4b230-395b-4553-837f-3753c105d3db" containerName="registry-server"
Jan 10 07:23:48 crc kubenswrapper[4810]: I0110 07:23:48.893029 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f4b230-395b-4553-837f-3753c105d3db" containerName="registry-server"
Jan 10 07:23:48 crc kubenswrapper[4810]: I0110 07:23:48.893120 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="00f4b230-395b-4553-837f-3753c105d3db" containerName="registry-server"
Jan 10 07:23:48 crc kubenswrapper[4810]: I0110 07:23:48.893856 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:48 crc kubenswrapper[4810]: I0110 07:23:48.912956 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfpwx"]
Jan 10 07:23:49 crc kubenswrapper[4810]: I0110 07:23:49.070133 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2c5v\" (UniqueName: \"kubernetes.io/projected/c8567de3-c2ba-4a96-be99-b81a2883439c-kube-api-access-t2c5v\") pod \"redhat-marketplace-dfpwx\" (UID: \"c8567de3-c2ba-4a96-be99-b81a2883439c\") " pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:49 crc kubenswrapper[4810]: I0110 07:23:49.070224 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8567de3-c2ba-4a96-be99-b81a2883439c-utilities\") pod \"redhat-marketplace-dfpwx\" (UID: \"c8567de3-c2ba-4a96-be99-b81a2883439c\") " pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:49 crc kubenswrapper[4810]: I0110 07:23:49.070248 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8567de3-c2ba-4a96-be99-b81a2883439c-catalog-content\") pod \"redhat-marketplace-dfpwx\" (UID: \"c8567de3-c2ba-4a96-be99-b81a2883439c\") " pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:49 crc kubenswrapper[4810]: I0110 07:23:49.170916 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2c5v\" (UniqueName: \"kubernetes.io/projected/c8567de3-c2ba-4a96-be99-b81a2883439c-kube-api-access-t2c5v\") pod \"redhat-marketplace-dfpwx\" (UID: \"c8567de3-c2ba-4a96-be99-b81a2883439c\") " pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:49 crc kubenswrapper[4810]: I0110 07:23:49.170999 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8567de3-c2ba-4a96-be99-b81a2883439c-utilities\") pod \"redhat-marketplace-dfpwx\" (UID: \"c8567de3-c2ba-4a96-be99-b81a2883439c\") " pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:49 crc kubenswrapper[4810]: I0110 07:23:49.171036 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8567de3-c2ba-4a96-be99-b81a2883439c-catalog-content\") pod \"redhat-marketplace-dfpwx\" (UID: \"c8567de3-c2ba-4a96-be99-b81a2883439c\") " pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:49 crc kubenswrapper[4810]: I0110 07:23:49.171514 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8567de3-c2ba-4a96-be99-b81a2883439c-catalog-content\") pod \"redhat-marketplace-dfpwx\" (UID: \"c8567de3-c2ba-4a96-be99-b81a2883439c\") " pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:49 crc kubenswrapper[4810]: I0110 07:23:49.171579 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8567de3-c2ba-4a96-be99-b81a2883439c-utilities\") pod \"redhat-marketplace-dfpwx\" (UID: \"c8567de3-c2ba-4a96-be99-b81a2883439c\") " pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:49 crc kubenswrapper[4810]: I0110 07:23:49.197690 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2c5v\" (UniqueName: \"kubernetes.io/projected/c8567de3-c2ba-4a96-be99-b81a2883439c-kube-api-access-t2c5v\") pod \"redhat-marketplace-dfpwx\" (UID: \"c8567de3-c2ba-4a96-be99-b81a2883439c\") " pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:49 crc kubenswrapper[4810]: I0110 07:23:49.216386 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:49 crc kubenswrapper[4810]: I0110 07:23:49.424156 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfpwx"]
Jan 10 07:23:49 crc kubenswrapper[4810]: I0110 07:23:49.566439 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfpwx" event={"ID":"c8567de3-c2ba-4a96-be99-b81a2883439c","Type":"ContainerStarted","Data":"62ce94d5737634f17d231af74953cede5cfa335aa2864fd7a19f126b99edae4d"}
Jan 10 07:23:50 crc kubenswrapper[4810]: I0110 07:23:50.578317 4810 generic.go:334] "Generic (PLEG): container finished" podID="c8567de3-c2ba-4a96-be99-b81a2883439c" containerID="0cc3912272d75797b2f4a4d1cd6754fd18cef686e5fdb4793365afee7162fe3c" exitCode=0
Jan 10 07:23:50 crc kubenswrapper[4810]: I0110 07:23:50.578395 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfpwx" event={"ID":"c8567de3-c2ba-4a96-be99-b81a2883439c","Type":"ContainerDied","Data":"0cc3912272d75797b2f4a4d1cd6754fd18cef686e5fdb4793365afee7162fe3c"}
Jan 10 07:23:53 crc kubenswrapper[4810]: I0110 07:23:53.597810 4810 generic.go:334] "Generic (PLEG): container finished" podID="c8567de3-c2ba-4a96-be99-b81a2883439c" containerID="6897aaa27ff4dbca72904f8b16d7c8b17ad3cc5b91db8bef14729268a6fda071" exitCode=0
Jan 10 07:23:53 crc kubenswrapper[4810]: I0110 07:23:53.597894 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfpwx" event={"ID":"c8567de3-c2ba-4a96-be99-b81a2883439c","Type":"ContainerDied","Data":"6897aaa27ff4dbca72904f8b16d7c8b17ad3cc5b91db8bef14729268a6fda071"}
Jan 10 07:23:54 crc kubenswrapper[4810]: I0110 07:23:54.607543 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfpwx" event={"ID":"c8567de3-c2ba-4a96-be99-b81a2883439c","Type":"ContainerStarted","Data":"c2be8bc8a8989e40cd4e7a631e86f9650cf8a2a201ff4aad8eab659943b285a4"}
Jan 10 07:23:54 crc kubenswrapper[4810]: I0110 07:23:54.629880 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dfpwx" podStartSLOduration=3.178048895 podStartE2EDuration="6.629863463s" podCreationTimestamp="2026-01-10 07:23:48 +0000 UTC" firstStartedPulling="2026-01-10 07:23:50.580564553 +0000 UTC m=+2259.196057476" lastFinishedPulling="2026-01-10 07:23:54.032379161 +0000 UTC m=+2262.647872044" observedRunningTime="2026-01-10 07:23:54.625084709 +0000 UTC m=+2263.240577612" watchObservedRunningTime="2026-01-10 07:23:54.629863463 +0000 UTC m=+2263.245356346"
Jan 10 07:23:59 crc kubenswrapper[4810]: I0110 07:23:59.217572 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:59 crc kubenswrapper[4810]: I0110 07:23:59.217920 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:59 crc kubenswrapper[4810]: I0110 07:23:59.262331 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:59 crc kubenswrapper[4810]: I0110 07:23:59.687537 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:23:59 crc kubenswrapper[4810]: I0110 07:23:59.741063 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfpwx"]
Jan 10 07:24:01 crc kubenswrapper[4810]: I0110 07:24:01.652556 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dfpwx" podUID="c8567de3-c2ba-4a96-be99-b81a2883439c" containerName="registry-server" containerID="cri-o://c2be8bc8a8989e40cd4e7a631e86f9650cf8a2a201ff4aad8eab659943b285a4" gracePeriod=2
Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.490732 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dfpwx"
Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.659012 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8567de3-c2ba-4a96-be99-b81a2883439c-catalog-content\") pod \"c8567de3-c2ba-4a96-be99-b81a2883439c\" (UID: \"c8567de3-c2ba-4a96-be99-b81a2883439c\") "
Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.659111 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2c5v\" (UniqueName: \"kubernetes.io/projected/c8567de3-c2ba-4a96-be99-b81a2883439c-kube-api-access-t2c5v\") pod \"c8567de3-c2ba-4a96-be99-b81a2883439c\" (UID: \"c8567de3-c2ba-4a96-be99-b81a2883439c\") "
Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.659237 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8567de3-c2ba-4a96-be99-b81a2883439c-utilities\") pod \"c8567de3-c2ba-4a96-be99-b81a2883439c\" (UID: \"c8567de3-c2ba-4a96-be99-b81a2883439c\") "
Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.660988 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8567de3-c2ba-4a96-be99-b81a2883439c-utilities" (OuterVolumeSpecName: "utilities") pod "c8567de3-c2ba-4a96-be99-b81a2883439c" (UID: "c8567de3-c2ba-4a96-be99-b81a2883439c"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.661341 4810 generic.go:334] "Generic (PLEG): container finished" podID="c8567de3-c2ba-4a96-be99-b81a2883439c" containerID="c2be8bc8a8989e40cd4e7a631e86f9650cf8a2a201ff4aad8eab659943b285a4" exitCode=0 Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.661414 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dfpwx" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.661413 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfpwx" event={"ID":"c8567de3-c2ba-4a96-be99-b81a2883439c","Type":"ContainerDied","Data":"c2be8bc8a8989e40cd4e7a631e86f9650cf8a2a201ff4aad8eab659943b285a4"} Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.661726 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dfpwx" event={"ID":"c8567de3-c2ba-4a96-be99-b81a2883439c","Type":"ContainerDied","Data":"62ce94d5737634f17d231af74953cede5cfa335aa2864fd7a19f126b99edae4d"} Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.661793 4810 scope.go:117] "RemoveContainer" containerID="c2be8bc8a8989e40cd4e7a631e86f9650cf8a2a201ff4aad8eab659943b285a4" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.668416 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8567de3-c2ba-4a96-be99-b81a2883439c-kube-api-access-t2c5v" (OuterVolumeSpecName: "kube-api-access-t2c5v") pod "c8567de3-c2ba-4a96-be99-b81a2883439c" (UID: "c8567de3-c2ba-4a96-be99-b81a2883439c"). InnerVolumeSpecName "kube-api-access-t2c5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.704143 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8567de3-c2ba-4a96-be99-b81a2883439c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8567de3-c2ba-4a96-be99-b81a2883439c" (UID: "c8567de3-c2ba-4a96-be99-b81a2883439c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.726625 4810 scope.go:117] "RemoveContainer" containerID="6897aaa27ff4dbca72904f8b16d7c8b17ad3cc5b91db8bef14729268a6fda071" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.755261 4810 scope.go:117] "RemoveContainer" containerID="0cc3912272d75797b2f4a4d1cd6754fd18cef686e5fdb4793365afee7162fe3c" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.761189 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8567de3-c2ba-4a96-be99-b81a2883439c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.761233 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2c5v\" (UniqueName: \"kubernetes.io/projected/c8567de3-c2ba-4a96-be99-b81a2883439c-kube-api-access-t2c5v\") on node \"crc\" DevicePath \"\"" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.761246 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8567de3-c2ba-4a96-be99-b81a2883439c-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.770626 4810 scope.go:117] "RemoveContainer" containerID="c2be8bc8a8989e40cd4e7a631e86f9650cf8a2a201ff4aad8eab659943b285a4" Jan 10 07:24:02 crc kubenswrapper[4810]: E0110 07:24:02.771081 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"c2be8bc8a8989e40cd4e7a631e86f9650cf8a2a201ff4aad8eab659943b285a4\": container with ID starting with c2be8bc8a8989e40cd4e7a631e86f9650cf8a2a201ff4aad8eab659943b285a4 not found: ID does not exist" containerID="c2be8bc8a8989e40cd4e7a631e86f9650cf8a2a201ff4aad8eab659943b285a4" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.771184 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2be8bc8a8989e40cd4e7a631e86f9650cf8a2a201ff4aad8eab659943b285a4"} err="failed to get container status \"c2be8bc8a8989e40cd4e7a631e86f9650cf8a2a201ff4aad8eab659943b285a4\": rpc error: code = NotFound desc = could not find container \"c2be8bc8a8989e40cd4e7a631e86f9650cf8a2a201ff4aad8eab659943b285a4\": container with ID starting with c2be8bc8a8989e40cd4e7a631e86f9650cf8a2a201ff4aad8eab659943b285a4 not found: ID does not exist" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.771320 4810 scope.go:117] "RemoveContainer" containerID="6897aaa27ff4dbca72904f8b16d7c8b17ad3cc5b91db8bef14729268a6fda071" Jan 10 07:24:02 crc kubenswrapper[4810]: E0110 07:24:02.771785 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6897aaa27ff4dbca72904f8b16d7c8b17ad3cc5b91db8bef14729268a6fda071\": container with ID starting with 6897aaa27ff4dbca72904f8b16d7c8b17ad3cc5b91db8bef14729268a6fda071 not found: ID does not exist" containerID="6897aaa27ff4dbca72904f8b16d7c8b17ad3cc5b91db8bef14729268a6fda071" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.771829 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6897aaa27ff4dbca72904f8b16d7c8b17ad3cc5b91db8bef14729268a6fda071"} err="failed to get container status \"6897aaa27ff4dbca72904f8b16d7c8b17ad3cc5b91db8bef14729268a6fda071\": rpc error: code = NotFound desc = could not find container 
\"6897aaa27ff4dbca72904f8b16d7c8b17ad3cc5b91db8bef14729268a6fda071\": container with ID starting with 6897aaa27ff4dbca72904f8b16d7c8b17ad3cc5b91db8bef14729268a6fda071 not found: ID does not exist" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.771861 4810 scope.go:117] "RemoveContainer" containerID="0cc3912272d75797b2f4a4d1cd6754fd18cef686e5fdb4793365afee7162fe3c" Jan 10 07:24:02 crc kubenswrapper[4810]: E0110 07:24:02.772245 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cc3912272d75797b2f4a4d1cd6754fd18cef686e5fdb4793365afee7162fe3c\": container with ID starting with 0cc3912272d75797b2f4a4d1cd6754fd18cef686e5fdb4793365afee7162fe3c not found: ID does not exist" containerID="0cc3912272d75797b2f4a4d1cd6754fd18cef686e5fdb4793365afee7162fe3c" Jan 10 07:24:02 crc kubenswrapper[4810]: I0110 07:24:02.772321 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cc3912272d75797b2f4a4d1cd6754fd18cef686e5fdb4793365afee7162fe3c"} err="failed to get container status \"0cc3912272d75797b2f4a4d1cd6754fd18cef686e5fdb4793365afee7162fe3c\": rpc error: code = NotFound desc = could not find container \"0cc3912272d75797b2f4a4d1cd6754fd18cef686e5fdb4793365afee7162fe3c\": container with ID starting with 0cc3912272d75797b2f4a4d1cd6754fd18cef686e5fdb4793365afee7162fe3c not found: ID does not exist" Jan 10 07:24:03 crc kubenswrapper[4810]: I0110 07:24:03.000691 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfpwx"] Jan 10 07:24:03 crc kubenswrapper[4810]: I0110 07:24:03.006759 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dfpwx"] Jan 10 07:24:03 crc kubenswrapper[4810]: I0110 07:24:03.705144 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8567de3-c2ba-4a96-be99-b81a2883439c" 
path="/var/lib/kubelet/pods/c8567de3-c2ba-4a96-be99-b81a2883439c/volumes" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.150024 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hgwl6"] Jan 10 07:24:07 crc kubenswrapper[4810]: E0110 07:24:07.150568 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8567de3-c2ba-4a96-be99-b81a2883439c" containerName="registry-server" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.150584 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8567de3-c2ba-4a96-be99-b81a2883439c" containerName="registry-server" Jan 10 07:24:07 crc kubenswrapper[4810]: E0110 07:24:07.150612 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8567de3-c2ba-4a96-be99-b81a2883439c" containerName="extract-utilities" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.150619 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8567de3-c2ba-4a96-be99-b81a2883439c" containerName="extract-utilities" Jan 10 07:24:07 crc kubenswrapper[4810]: E0110 07:24:07.150633 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8567de3-c2ba-4a96-be99-b81a2883439c" containerName="extract-content" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.150640 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8567de3-c2ba-4a96-be99-b81a2883439c" containerName="extract-content" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.150732 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8567de3-c2ba-4a96-be99-b81a2883439c" containerName="registry-server" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.153069 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.169028 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgwl6"] Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.321104 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7993906-3812-4ffd-b90d-3760a22e26bd-catalog-content\") pod \"certified-operators-hgwl6\" (UID: \"d7993906-3812-4ffd-b90d-3760a22e26bd\") " pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.321281 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7993906-3812-4ffd-b90d-3760a22e26bd-utilities\") pod \"certified-operators-hgwl6\" (UID: \"d7993906-3812-4ffd-b90d-3760a22e26bd\") " pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.321324 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq5r8\" (UniqueName: \"kubernetes.io/projected/d7993906-3812-4ffd-b90d-3760a22e26bd-kube-api-access-rq5r8\") pod \"certified-operators-hgwl6\" (UID: \"d7993906-3812-4ffd-b90d-3760a22e26bd\") " pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.422555 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7993906-3812-4ffd-b90d-3760a22e26bd-catalog-content\") pod \"certified-operators-hgwl6\" (UID: \"d7993906-3812-4ffd-b90d-3760a22e26bd\") " pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.422669 4810 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7993906-3812-4ffd-b90d-3760a22e26bd-utilities\") pod \"certified-operators-hgwl6\" (UID: \"d7993906-3812-4ffd-b90d-3760a22e26bd\") " pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.422689 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq5r8\" (UniqueName: \"kubernetes.io/projected/d7993906-3812-4ffd-b90d-3760a22e26bd-kube-api-access-rq5r8\") pod \"certified-operators-hgwl6\" (UID: \"d7993906-3812-4ffd-b90d-3760a22e26bd\") " pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.423216 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7993906-3812-4ffd-b90d-3760a22e26bd-catalog-content\") pod \"certified-operators-hgwl6\" (UID: \"d7993906-3812-4ffd-b90d-3760a22e26bd\") " pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.423245 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7993906-3812-4ffd-b90d-3760a22e26bd-utilities\") pod \"certified-operators-hgwl6\" (UID: \"d7993906-3812-4ffd-b90d-3760a22e26bd\") " pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.441391 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq5r8\" (UniqueName: \"kubernetes.io/projected/d7993906-3812-4ffd-b90d-3760a22e26bd-kube-api-access-rq5r8\") pod \"certified-operators-hgwl6\" (UID: \"d7993906-3812-4ffd-b90d-3760a22e26bd\") " pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.475268 4810 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:07 crc kubenswrapper[4810]: I0110 07:24:07.933297 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgwl6"] Jan 10 07:24:08 crc kubenswrapper[4810]: I0110 07:24:08.703418 4810 generic.go:334] "Generic (PLEG): container finished" podID="d7993906-3812-4ffd-b90d-3760a22e26bd" containerID="e28494bfd51a6725f41e8344049c8be8293770941fe567b5c4cc71df4694762b" exitCode=0 Jan 10 07:24:08 crc kubenswrapper[4810]: I0110 07:24:08.703474 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgwl6" event={"ID":"d7993906-3812-4ffd-b90d-3760a22e26bd","Type":"ContainerDied","Data":"e28494bfd51a6725f41e8344049c8be8293770941fe567b5c4cc71df4694762b"} Jan 10 07:24:08 crc kubenswrapper[4810]: I0110 07:24:08.703505 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgwl6" event={"ID":"d7993906-3812-4ffd-b90d-3760a22e26bd","Type":"ContainerStarted","Data":"ece07d677090413b1c4ce0a290035a5d2b943703cbd0aec5e371935b18fa279b"} Jan 10 07:24:09 crc kubenswrapper[4810]: I0110 07:24:09.710562 4810 generic.go:334] "Generic (PLEG): container finished" podID="d7993906-3812-4ffd-b90d-3760a22e26bd" containerID="f1d2c78d7abe788b1e4b813739cbb2b61d73d7409bbda7d935ec3f8fcf574ad1" exitCode=0 Jan 10 07:24:09 crc kubenswrapper[4810]: I0110 07:24:09.710662 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgwl6" event={"ID":"d7993906-3812-4ffd-b90d-3760a22e26bd","Type":"ContainerDied","Data":"f1d2c78d7abe788b1e4b813739cbb2b61d73d7409bbda7d935ec3f8fcf574ad1"} Jan 10 07:24:10 crc kubenswrapper[4810]: I0110 07:24:10.718722 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgwl6" 
event={"ID":"d7993906-3812-4ffd-b90d-3760a22e26bd","Type":"ContainerStarted","Data":"2781a60d18b59b59fc782320f3d33a40b57146791648cd363ae7697a0897142d"} Jan 10 07:24:10 crc kubenswrapper[4810]: I0110 07:24:10.738097 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hgwl6" podStartSLOduration=2.310517633 podStartE2EDuration="3.738080002s" podCreationTimestamp="2026-01-10 07:24:07 +0000 UTC" firstStartedPulling="2026-01-10 07:24:08.705954902 +0000 UTC m=+2277.321447785" lastFinishedPulling="2026-01-10 07:24:10.133517261 +0000 UTC m=+2278.749010154" observedRunningTime="2026-01-10 07:24:10.736854602 +0000 UTC m=+2279.352347505" watchObservedRunningTime="2026-01-10 07:24:10.738080002 +0000 UTC m=+2279.353572885" Jan 10 07:24:17 crc kubenswrapper[4810]: I0110 07:24:17.476046 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:17 crc kubenswrapper[4810]: I0110 07:24:17.478565 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:17 crc kubenswrapper[4810]: I0110 07:24:17.519933 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:17 crc kubenswrapper[4810]: I0110 07:24:17.807489 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:17 crc kubenswrapper[4810]: I0110 07:24:17.864224 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgwl6"] Jan 10 07:24:19 crc kubenswrapper[4810]: I0110 07:24:19.769808 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hgwl6" podUID="d7993906-3812-4ffd-b90d-3760a22e26bd" containerName="registry-server" 
containerID="cri-o://2781a60d18b59b59fc782320f3d33a40b57146791648cd363ae7697a0897142d" gracePeriod=2 Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.134273 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.313583 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq5r8\" (UniqueName: \"kubernetes.io/projected/d7993906-3812-4ffd-b90d-3760a22e26bd-kube-api-access-rq5r8\") pod \"d7993906-3812-4ffd-b90d-3760a22e26bd\" (UID: \"d7993906-3812-4ffd-b90d-3760a22e26bd\") " Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.313671 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7993906-3812-4ffd-b90d-3760a22e26bd-catalog-content\") pod \"d7993906-3812-4ffd-b90d-3760a22e26bd\" (UID: \"d7993906-3812-4ffd-b90d-3760a22e26bd\") " Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.313740 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7993906-3812-4ffd-b90d-3760a22e26bd-utilities\") pod \"d7993906-3812-4ffd-b90d-3760a22e26bd\" (UID: \"d7993906-3812-4ffd-b90d-3760a22e26bd\") " Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.314787 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7993906-3812-4ffd-b90d-3760a22e26bd-utilities" (OuterVolumeSpecName: "utilities") pod "d7993906-3812-4ffd-b90d-3760a22e26bd" (UID: "d7993906-3812-4ffd-b90d-3760a22e26bd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.320781 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7993906-3812-4ffd-b90d-3760a22e26bd-kube-api-access-rq5r8" (OuterVolumeSpecName: "kube-api-access-rq5r8") pod "d7993906-3812-4ffd-b90d-3760a22e26bd" (UID: "d7993906-3812-4ffd-b90d-3760a22e26bd"). InnerVolumeSpecName "kube-api-access-rq5r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.364526 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7993906-3812-4ffd-b90d-3760a22e26bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7993906-3812-4ffd-b90d-3760a22e26bd" (UID: "d7993906-3812-4ffd-b90d-3760a22e26bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.416011 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7993906-3812-4ffd-b90d-3760a22e26bd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.416091 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7993906-3812-4ffd-b90d-3760a22e26bd-utilities\") on node \"crc\" DevicePath \"\"" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.416119 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq5r8\" (UniqueName: \"kubernetes.io/projected/d7993906-3812-4ffd-b90d-3760a22e26bd-kube-api-access-rq5r8\") on node \"crc\" DevicePath \"\"" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.778668 4810 generic.go:334] "Generic (PLEG): container finished" podID="d7993906-3812-4ffd-b90d-3760a22e26bd" 
containerID="2781a60d18b59b59fc782320f3d33a40b57146791648cd363ae7697a0897142d" exitCode=0 Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.778715 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgwl6" event={"ID":"d7993906-3812-4ffd-b90d-3760a22e26bd","Type":"ContainerDied","Data":"2781a60d18b59b59fc782320f3d33a40b57146791648cd363ae7697a0897142d"} Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.778727 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgwl6" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.778752 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgwl6" event={"ID":"d7993906-3812-4ffd-b90d-3760a22e26bd","Type":"ContainerDied","Data":"ece07d677090413b1c4ce0a290035a5d2b943703cbd0aec5e371935b18fa279b"} Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.778778 4810 scope.go:117] "RemoveContainer" containerID="2781a60d18b59b59fc782320f3d33a40b57146791648cd363ae7697a0897142d" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.794565 4810 scope.go:117] "RemoveContainer" containerID="f1d2c78d7abe788b1e4b813739cbb2b61d73d7409bbda7d935ec3f8fcf574ad1" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.809503 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgwl6"] Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.809633 4810 scope.go:117] "RemoveContainer" containerID="e28494bfd51a6725f41e8344049c8be8293770941fe567b5c4cc71df4694762b" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.813134 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hgwl6"] Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.831780 4810 scope.go:117] "RemoveContainer" containerID="2781a60d18b59b59fc782320f3d33a40b57146791648cd363ae7697a0897142d" Jan 10 
07:24:20 crc kubenswrapper[4810]: E0110 07:24:20.832367 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2781a60d18b59b59fc782320f3d33a40b57146791648cd363ae7697a0897142d\": container with ID starting with 2781a60d18b59b59fc782320f3d33a40b57146791648cd363ae7697a0897142d not found: ID does not exist" containerID="2781a60d18b59b59fc782320f3d33a40b57146791648cd363ae7697a0897142d" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.832422 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2781a60d18b59b59fc782320f3d33a40b57146791648cd363ae7697a0897142d"} err="failed to get container status \"2781a60d18b59b59fc782320f3d33a40b57146791648cd363ae7697a0897142d\": rpc error: code = NotFound desc = could not find container \"2781a60d18b59b59fc782320f3d33a40b57146791648cd363ae7697a0897142d\": container with ID starting with 2781a60d18b59b59fc782320f3d33a40b57146791648cd363ae7697a0897142d not found: ID does not exist" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.832456 4810 scope.go:117] "RemoveContainer" containerID="f1d2c78d7abe788b1e4b813739cbb2b61d73d7409bbda7d935ec3f8fcf574ad1" Jan 10 07:24:20 crc kubenswrapper[4810]: E0110 07:24:20.832891 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d2c78d7abe788b1e4b813739cbb2b61d73d7409bbda7d935ec3f8fcf574ad1\": container with ID starting with f1d2c78d7abe788b1e4b813739cbb2b61d73d7409bbda7d935ec3f8fcf574ad1 not found: ID does not exist" containerID="f1d2c78d7abe788b1e4b813739cbb2b61d73d7409bbda7d935ec3f8fcf574ad1" Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.832920 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d2c78d7abe788b1e4b813739cbb2b61d73d7409bbda7d935ec3f8fcf574ad1"} err="failed to get container status 
\"f1d2c78d7abe788b1e4b813739cbb2b61d73d7409bbda7d935ec3f8fcf574ad1\": rpc error: code = NotFound desc = could not find container \"f1d2c78d7abe788b1e4b813739cbb2b61d73d7409bbda7d935ec3f8fcf574ad1\": container with ID starting with f1d2c78d7abe788b1e4b813739cbb2b61d73d7409bbda7d935ec3f8fcf574ad1 not found: ID does not exist"
Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.832940 4810 scope.go:117] "RemoveContainer" containerID="e28494bfd51a6725f41e8344049c8be8293770941fe567b5c4cc71df4694762b"
Jan 10 07:24:20 crc kubenswrapper[4810]: E0110 07:24:20.834030 4810 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e28494bfd51a6725f41e8344049c8be8293770941fe567b5c4cc71df4694762b\": container with ID starting with e28494bfd51a6725f41e8344049c8be8293770941fe567b5c4cc71df4694762b not found: ID does not exist" containerID="e28494bfd51a6725f41e8344049c8be8293770941fe567b5c4cc71df4694762b"
Jan 10 07:24:20 crc kubenswrapper[4810]: I0110 07:24:20.834080 4810 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e28494bfd51a6725f41e8344049c8be8293770941fe567b5c4cc71df4694762b"} err="failed to get container status \"e28494bfd51a6725f41e8344049c8be8293770941fe567b5c4cc71df4694762b\": rpc error: code = NotFound desc = could not find container \"e28494bfd51a6725f41e8344049c8be8293770941fe567b5c4cc71df4694762b\": container with ID starting with e28494bfd51a6725f41e8344049c8be8293770941fe567b5c4cc71df4694762b not found: ID does not exist"
Jan 10 07:24:21 crc kubenswrapper[4810]: I0110 07:24:21.704647 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7993906-3812-4ffd-b90d-3760a22e26bd" path="/var/lib/kubelet/pods/d7993906-3812-4ffd-b90d-3760a22e26bd/volumes"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.249440 4810 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ckh4h"]
Jan 10 07:24:54 crc kubenswrapper[4810]: E0110 07:24:54.254071 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7993906-3812-4ffd-b90d-3760a22e26bd" containerName="extract-utilities"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.257901 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7993906-3812-4ffd-b90d-3760a22e26bd" containerName="extract-utilities"
Jan 10 07:24:54 crc kubenswrapper[4810]: E0110 07:24:54.258067 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7993906-3812-4ffd-b90d-3760a22e26bd" containerName="registry-server"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.258136 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7993906-3812-4ffd-b90d-3760a22e26bd" containerName="registry-server"
Jan 10 07:24:54 crc kubenswrapper[4810]: E0110 07:24:54.258234 4810 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7993906-3812-4ffd-b90d-3760a22e26bd" containerName="extract-content"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.258315 4810 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7993906-3812-4ffd-b90d-3760a22e26bd" containerName="extract-content"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.259548 4810 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7993906-3812-4ffd-b90d-3760a22e26bd" containerName="registry-server"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.261326 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.272772 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ckh4h"]
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.302469 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7a8289-6650-4d64-9948-4d885836a9c5-catalog-content\") pod \"redhat-operators-ckh4h\" (UID: \"4e7a8289-6650-4d64-9948-4d885836a9c5\") " pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.302845 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7a8289-6650-4d64-9948-4d885836a9c5-utilities\") pod \"redhat-operators-ckh4h\" (UID: \"4e7a8289-6650-4d64-9948-4d885836a9c5\") " pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.302944 4810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxs2g\" (UniqueName: \"kubernetes.io/projected/4e7a8289-6650-4d64-9948-4d885836a9c5-kube-api-access-pxs2g\") pod \"redhat-operators-ckh4h\" (UID: \"4e7a8289-6650-4d64-9948-4d885836a9c5\") " pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.403818 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7a8289-6650-4d64-9948-4d885836a9c5-utilities\") pod \"redhat-operators-ckh4h\" (UID: \"4e7a8289-6650-4d64-9948-4d885836a9c5\") " pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.404109 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxs2g\" (UniqueName: \"kubernetes.io/projected/4e7a8289-6650-4d64-9948-4d885836a9c5-kube-api-access-pxs2g\") pod \"redhat-operators-ckh4h\" (UID: \"4e7a8289-6650-4d64-9948-4d885836a9c5\") " pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.404270 4810 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7a8289-6650-4d64-9948-4d885836a9c5-catalog-content\") pod \"redhat-operators-ckh4h\" (UID: \"4e7a8289-6650-4d64-9948-4d885836a9c5\") " pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.404439 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7a8289-6650-4d64-9948-4d885836a9c5-utilities\") pod \"redhat-operators-ckh4h\" (UID: \"4e7a8289-6650-4d64-9948-4d885836a9c5\") " pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.404690 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7a8289-6650-4d64-9948-4d885836a9c5-catalog-content\") pod \"redhat-operators-ckh4h\" (UID: \"4e7a8289-6650-4d64-9948-4d885836a9c5\") " pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.426410 4810 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxs2g\" (UniqueName: \"kubernetes.io/projected/4e7a8289-6650-4d64-9948-4d885836a9c5-kube-api-access-pxs2g\") pod \"redhat-operators-ckh4h\" (UID: \"4e7a8289-6650-4d64-9948-4d885836a9c5\") " pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.579113 4810 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.967643 4810 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ckh4h"]
Jan 10 07:24:54 crc kubenswrapper[4810]: I0110 07:24:54.991468 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckh4h" event={"ID":"4e7a8289-6650-4d64-9948-4d885836a9c5","Type":"ContainerStarted","Data":"5497579a1699ad23425bd528c366bbb69f5ecbfbbfbf9a14072bef5d2ef54050"}
Jan 10 07:24:55 crc kubenswrapper[4810]: I0110 07:24:55.999519 4810 generic.go:334] "Generic (PLEG): container finished" podID="4e7a8289-6650-4d64-9948-4d885836a9c5" containerID="5796b580436480e7072e93664560d51ea50534fa603bdf54544df035a7611ff3" exitCode=0
Jan 10 07:24:56 crc kubenswrapper[4810]: I0110 07:24:55.999692 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckh4h" event={"ID":"4e7a8289-6650-4d64-9948-4d885836a9c5","Type":"ContainerDied","Data":"5796b580436480e7072e93664560d51ea50534fa603bdf54544df035a7611ff3"}
Jan 10 07:24:58 crc kubenswrapper[4810]: I0110 07:24:58.016085 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckh4h" event={"ID":"4e7a8289-6650-4d64-9948-4d885836a9c5","Type":"ContainerStarted","Data":"037aed9e824355cdb1c01b33ac92a0ac2ed265ebb820c3e4aeec4a2013b4381c"}
Jan 10 07:24:59 crc kubenswrapper[4810]: I0110 07:24:59.028059 4810 generic.go:334] "Generic (PLEG): container finished" podID="4e7a8289-6650-4d64-9948-4d885836a9c5" containerID="037aed9e824355cdb1c01b33ac92a0ac2ed265ebb820c3e4aeec4a2013b4381c" exitCode=0
Jan 10 07:24:59 crc kubenswrapper[4810]: I0110 07:24:59.028148 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckh4h" event={"ID":"4e7a8289-6650-4d64-9948-4d885836a9c5","Type":"ContainerDied","Data":"037aed9e824355cdb1c01b33ac92a0ac2ed265ebb820c3e4aeec4a2013b4381c"}
Jan 10 07:25:02 crc kubenswrapper[4810]: I0110 07:25:02.050379 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckh4h" event={"ID":"4e7a8289-6650-4d64-9948-4d885836a9c5","Type":"ContainerStarted","Data":"2fe8193e09ff5d541c3cfee663c70f2436c0cb3b6e6bbc216c8ce69113c42408"}
Jan 10 07:25:02 crc kubenswrapper[4810]: I0110 07:25:02.069647 4810 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ckh4h" podStartSLOduration=3.672660407 podStartE2EDuration="8.069630461s" podCreationTimestamp="2026-01-10 07:24:54 +0000 UTC" firstStartedPulling="2026-01-10 07:24:56.001185342 +0000 UTC m=+2324.616678225" lastFinishedPulling="2026-01-10 07:25:00.398155366 +0000 UTC m=+2329.013648279" observedRunningTime="2026-01-10 07:25:02.066108466 +0000 UTC m=+2330.681601399" watchObservedRunningTime="2026-01-10 07:25:02.069630461 +0000 UTC m=+2330.685123344"
Jan 10 07:25:04 crc kubenswrapper[4810]: I0110 07:25:04.579826 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:25:04 crc kubenswrapper[4810]: I0110 07:25:04.580210 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:25:05 crc kubenswrapper[4810]: I0110 07:25:05.621103 4810 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ckh4h" podUID="4e7a8289-6650-4d64-9948-4d885836a9c5" containerName="registry-server" probeResult="failure" output=<
Jan 10 07:25:05 crc kubenswrapper[4810]: timeout: failed to connect service ":50051" within 1s
Jan 10 07:25:05 crc kubenswrapper[4810]: >
Jan 10 07:25:14 crc kubenswrapper[4810]: I0110 07:25:14.625021 4810 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:25:14 crc kubenswrapper[4810]: I0110 07:25:14.664984 4810 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:25:14 crc kubenswrapper[4810]: I0110 07:25:14.857695 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ckh4h"]
Jan 10 07:25:16 crc kubenswrapper[4810]: I0110 07:25:16.129856 4810 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ckh4h" podUID="4e7a8289-6650-4d64-9948-4d885836a9c5" containerName="registry-server" containerID="cri-o://2fe8193e09ff5d541c3cfee663c70f2436c0cb3b6e6bbc216c8ce69113c42408" gracePeriod=2
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.157773 4810 generic.go:334] "Generic (PLEG): container finished" podID="4e7a8289-6650-4d64-9948-4d885836a9c5" containerID="2fe8193e09ff5d541c3cfee663c70f2436c0cb3b6e6bbc216c8ce69113c42408" exitCode=0
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.157864 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckh4h" event={"ID":"4e7a8289-6650-4d64-9948-4d885836a9c5","Type":"ContainerDied","Data":"2fe8193e09ff5d541c3cfee663c70f2436c0cb3b6e6bbc216c8ce69113c42408"}
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.274311 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.369316 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7a8289-6650-4d64-9948-4d885836a9c5-utilities\") pod \"4e7a8289-6650-4d64-9948-4d885836a9c5\" (UID: \"4e7a8289-6650-4d64-9948-4d885836a9c5\") "
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.369380 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7a8289-6650-4d64-9948-4d885836a9c5-catalog-content\") pod \"4e7a8289-6650-4d64-9948-4d885836a9c5\" (UID: \"4e7a8289-6650-4d64-9948-4d885836a9c5\") "
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.369437 4810 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxs2g\" (UniqueName: \"kubernetes.io/projected/4e7a8289-6650-4d64-9948-4d885836a9c5-kube-api-access-pxs2g\") pod \"4e7a8289-6650-4d64-9948-4d885836a9c5\" (UID: \"4e7a8289-6650-4d64-9948-4d885836a9c5\") "
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.370317 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e7a8289-6650-4d64-9948-4d885836a9c5-utilities" (OuterVolumeSpecName: "utilities") pod "4e7a8289-6650-4d64-9948-4d885836a9c5" (UID: "4e7a8289-6650-4d64-9948-4d885836a9c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.374551 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e7a8289-6650-4d64-9948-4d885836a9c5-kube-api-access-pxs2g" (OuterVolumeSpecName: "kube-api-access-pxs2g") pod "4e7a8289-6650-4d64-9948-4d885836a9c5" (UID: "4e7a8289-6650-4d64-9948-4d885836a9c5"). InnerVolumeSpecName "kube-api-access-pxs2g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.471248 4810 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e7a8289-6650-4d64-9948-4d885836a9c5-utilities\") on node \"crc\" DevicePath \"\""
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.471291 4810 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxs2g\" (UniqueName: \"kubernetes.io/projected/4e7a8289-6650-4d64-9948-4d885836a9c5-kube-api-access-pxs2g\") on node \"crc\" DevicePath \"\""
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.503802 4810 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e7a8289-6650-4d64-9948-4d885836a9c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e7a8289-6650-4d64-9948-4d885836a9c5" (UID: "4e7a8289-6650-4d64-9948-4d885836a9c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.572827 4810 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e7a8289-6650-4d64-9948-4d885836a9c5-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.883488 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 07:25:20 crc kubenswrapper[4810]: I0110 07:25:20.883815 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 10 07:25:21 crc kubenswrapper[4810]: I0110 07:25:21.168424 4810 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ckh4h" event={"ID":"4e7a8289-6650-4d64-9948-4d885836a9c5","Type":"ContainerDied","Data":"5497579a1699ad23425bd528c366bbb69f5ecbfbbfbf9a14072bef5d2ef54050"}
Jan 10 07:25:21 crc kubenswrapper[4810]: I0110 07:25:21.168494 4810 scope.go:117] "RemoveContainer" containerID="2fe8193e09ff5d541c3cfee663c70f2436c0cb3b6e6bbc216c8ce69113c42408"
Jan 10 07:25:21 crc kubenswrapper[4810]: I0110 07:25:21.168669 4810 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ckh4h"
Jan 10 07:25:21 crc kubenswrapper[4810]: I0110 07:25:21.193771 4810 scope.go:117] "RemoveContainer" containerID="037aed9e824355cdb1c01b33ac92a0ac2ed265ebb820c3e4aeec4a2013b4381c"
Jan 10 07:25:21 crc kubenswrapper[4810]: I0110 07:25:21.211071 4810 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ckh4h"]
Jan 10 07:25:21 crc kubenswrapper[4810]: I0110 07:25:21.215972 4810 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ckh4h"]
Jan 10 07:25:21 crc kubenswrapper[4810]: I0110 07:25:21.221723 4810 scope.go:117] "RemoveContainer" containerID="5796b580436480e7072e93664560d51ea50534fa603bdf54544df035a7611ff3"
Jan 10 07:25:21 crc kubenswrapper[4810]: I0110 07:25:21.702746 4810 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e7a8289-6650-4d64-9948-4d885836a9c5" path="/var/lib/kubelet/pods/4e7a8289-6650-4d64-9948-4d885836a9c5/volumes"
Jan 10 07:25:34 crc kubenswrapper[4810]: I0110 07:25:34.314437 4810 scope.go:117] "RemoveContainer" containerID="704cd4c95e8b13ef872271d0b9d842a153414dd0b19883307b66b1367d27709a"
Jan 10 07:25:34 crc kubenswrapper[4810]: I0110 07:25:34.348044 4810 scope.go:117]
"RemoveContainer" containerID="2aa7b7515f6d6c1c97057f6afde83d505200ba984e14927533c16fee2049c0ab"
Jan 10 07:25:50 crc kubenswrapper[4810]: I0110 07:25:50.883521 4810 patch_prober.go:28] interesting pod/machine-config-daemon-8c5qp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 10 07:25:50 crc kubenswrapper[4810]: I0110 07:25:50.885081 4810 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-8c5qp" podUID="b5b79429-9259-412f-bab8-27865ab7029b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"